Have Weekly Patterns been updated for the GraphQL API yet?

My script, which had been working fine pulling those from ‘https://placesapi.safegraph.com/v2/graphql’, is suddenly only able to get 245 records for all of VA (vs. 105,924 the week before).
I had been under the impression that that API was (closer to) the source which ultimately supplies the bulk files in shop. It seems like the ones in shop have been updated for the week of March 7th, 2022 (from what I hear from @U01ER4QUR5X).
@U01UD1PV277 has always been helpful (but often busy, I assume). I wonder if that endpoint is in the process of being deprecated, and there’s a new one we ought to use for Weekly Patterns. I apologize if I missed any announcements (I’ll go look now).


This topic was automatically generated from Slack. You can find the original thread here.

Yes, this is true, but I have not confirmed that the number of core_places with patterns data is more than 245. Just to clarify, Brian and I are only looking at Virginia, not the full US.

Looping in @vchen since you should be able to access Patterns from the API

Thanks.
I see some examples (Programmatically Call the Places API) use the following endpoint URL instead.

https://api.safegraph.com/v2/graphql

But I get the same result as before . . . actually one more record now (246), with either endpoint.
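For anyone comparing the two endpoints, a quick sanity check is to POST a query document to each one directly. Here is a minimal standard-library sketch (no `gql` dependency); note the `apikey` header name is an assumption on my part, so verify the auth scheme against the current Places API docs.

```python
# Minimal GraphQL POST using only the Python standard library.
# ASSUMPTION: auth is an "apikey" header; verify against the
# SafeGraph Places API documentation before relying on this.
import json
import urllib.request

ENDPOINT = "https://api.safegraph.com/v2/graphql"


def build_request(api_key: str, query: str) -> urllib.request.Request:
    """Package a GraphQL query document as a JSON POST request."""
    payload = json.dumps({"query": query}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"apikey": api_key, "Content-Type": "application/json"},
        method="POST",
    )


def run_query(api_key: str, query: str) -> dict:
    """Send the request and decode the JSON response body."""
    with urllib.request.urlopen(build_request(api_key, query), timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Swapping `ENDPOINT` between the two URLs makes it easy to confirm that both return the same record count.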

For the short term, we were able to use the shop data, but we would prefer to use GraphQL if we can. Thanks!

I could share my GraphQL query again, but . . .
We really just want to do a bit of bulk download (Weekly Patterns for region: “VA”, to provide Virginia Department of Health with good targets to set up mobile vaccination units).
I think it may have been @vchen who implied previously that SafeGraph might have a URL where (authenticated) folks could curl or wget bulk files.
Maybe it could be simple to pre-package patterns for each US region (state)? It seems like the on-demand packaging must take some server side processing time, which might be avoided with some basic preprocessing like this(?).
Of course, if @Mandy_University_of_Virginia likes the GraphQL output/product, I am happy to use that, as long as the API and endpoint are planned to remain stable.

Hey all, the bug fix for this is now live in production. You should be able to see the data resolve correctly if you run your queries again. Sorry for the inconvenience and please open a new thread if you find anything else that seems off!

@Michael_Gallagher_SafeGraph
I’m still only getting 246 records for VA.

INFO:__main__:start time: 2022-03-17T20:18:45.350153
INFO:__main__:total requests: 1   total records: 246
INFO:__main__:end time: 2022-03-17T20:18:47.112184
elapsed: 0:00:01.762031

Can you send me a copy of the request?

I’m using gql, etc., in a Jupyter notebook. Want me to share that?

Thanks! Ok, I was able to reproduce the issue. I will keep you posted about the progress on the fix.

If you need a quick work around while we work on a fix try using the places field:

query {
  search(filter: { address: { region: "VA" } }) {
    places {
      total_count
      results(first: 500 after: "UGxhY2U6MjIyLTIyNEA2M3ItdjJ0LW1jNSwwLjc5OTk0MzRfc2c6MDU4ZTY4YjgyYThkNGUxOTkwOTUxNzQ3ZmE0YmEyYTA=") {
        pageInfo {
          hasNextPage
          endCursor
        }
        edges {
          node {
            weekly_patterns(start_date: "2022-03-07", end_date: "2022-03-08") {
              placekey
              visits_by_day {
                visits
              }
            }
          }
        }
      }
    }
  }
}
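One note on that example: the `after:` cursor baked into it is page-specific. To pull everything, you would loop on `pageInfo`, feeding each page’s `endCursor` back into the next request’s `after:` argument. A generic sketch of that loop (where `run_page` is a hypothetical helper that executes the query for one cursor and returns the decoded `results` object):

```python
def paginate(run_page):
    """Yield every edge from a cursor-paginated GraphQL connection.

    run_page(cursor) is a hypothetical helper: it should run the query
    with `after: cursor` (no cursor on the first page) and return the
    decoded `results` object, i.e. {"edges": [...], "pageInfo": {...}}.
    """
    cursor = None
    while True:
        results = run_page(cursor)      # one GraphQL round trip
        yield from results["edges"]
        page = results["pageInfo"]
        if not page["hasNextPage"]:     # no more pages to fetch
            break
        cursor = page["endCursor"]      # becomes the next `after:` value
```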

Ok. Thanks @Michael_Gallagher_SafeGraph . . . and we might not really need this again until next week, as @Mandy_University_of_Virginia has kindly modified some scripts to use the shop download files, for this week.

So am I right in thinking that the “workaround” will be different in that it will end up pulling a lot more records, because it will include places which don’t even have associated Weekly Patterns data?

Anyway, I fear you might find some issues in that notebook related to my fanciness with request rate-limiting and smart retries, or something like that. :slightly_smiling_face:
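(For reference, the retry logic is nothing exotic; it’s roughly this exponential-backoff pattern, sketched generically here rather than the notebook’s exact code:)

```python
import time


def with_retries(fn, attempts=5, base_delay=1.0):
    """Call fn(), retrying failed calls with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                                  # out of retries
            time.sleep(base_delay * 2 ** attempt)      # 1s, 2s, 4s, ...
```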
But thanks for looking into this!