Facebook API and R – how to deal with time limit

I have already shared a tutorial on how to use R to start getting data from Facebook via its API for analysis (you can find the article here).

However, it covers only the basics of working with the Facebook API in R. The real fun starts with the next steps, and with all the little problems that come up along the way.

In this short article, I would like to show one way to deal with the time limit problem. The trick is probably quite trivial, but for R users at a certain level it might still be useful.

The thing is that Facebook does not allow unlimited access to its API. One problem is that it is not really clear what these limits are. For example, if you get data from Twitter, you may make 15 requests per 15 minutes. If you read the Facebook documentation or look for an answer on the internet, you will not find an “official” answer.

Unlike Twitter, there is no official information about a time limit for accessing the Facebook API.

No matter what the official time limit is, it is necessary to pace your requests. The solution here is to use the Sys.sleep function.

If you use a loop to get data from Facebook, you need to add this call at the end of the loop body, followed only by the closing “}”.

And the actual trick is to put it like this: “;Sys.sleep(2)”, appended after the last statement in the loop. The number indicates the number of seconds the loop should wait before the next iteration.
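To make this concrete, here is a minimal sketch of such a loop. It assumes the Rfacebook package and a valid OAuth token stored in fb_token, and the page names are just placeholders; if you fetch data differently, substitute your own call and keep only the Sys.sleep part.

# Minimal sketch of a paced loop (assumes the Rfacebook package and a valid
# OAuth token in fb_token; the page names below are placeholders)
library(Rfacebook)

pages <- c("page_one", "page_two", "page_three")
results <- list()

for (i in seq_along(pages)) {
  # Fetch the most recent posts of one page (adjust n as needed)
  results[[i]] <- getPage(pages[i], token = fb_token, n = 100)

  # Wait before the next request so we stay under the rate limit;
  # this is the last statement in the loop body, right before "}"
  Sys.sleep(2)
}

# Combine the per-page data frames into one
posts <- do.call(rbind, results)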

Other users report that the pause cannot be shorter than 1 second. In my experience, anything above 1.7 seconds worked well.
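If you want to stay on the safe side, you can also randomize the pause a little instead of using one fixed value. A tiny illustrative snippet, using the numbers mentioned above:

# Illustrative only: a random pause between roughly 1.7 and 2.5 seconds
Sys.sleep(runif(1, min = 1.7, max = 2.5))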

If anyone reading this article has a better solution to this problem, let me know. I would be very interested to hear it.
