My point here is to educate users, especially beginners, who are the most vulnerable to (and scared by) misinformation.
One way to really secure your server against abuse of URL or file fetching (which both methods discussed here are used for) is to stop it at a layer above PHP: the server's own firewall, not a PHP-powered firewall, since that could itself be compromised. Unless your server is badly configured so that the web user running the PHP processes can write or delete files outside the public web folder (which would be a very bad and sad situation), there is no way for a hacker who has injected some PHP code (or whatever) to pull additional external files from an arbitrary malicious IP address and plant them on the server. Maintaining whitelists, on the other hand, is painful, and even experienced admins avoid it, because IPs change over time (rarely for big companies like Google, but still). A reverse DNS check or something similar might make it more reliable, but all of that is outside PHP's scope; a rough application-level approximation is sketched below.
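If you still want some defence inside the application on top of the firewall, a minimal sketch of a host whitelist check in PHP could look like this. The $allowedHosts entries and the fetchIfAllowed() helper are made up for illustration, not part of any existing API:

<?php
// Hypothetical allowlist of hosts we are willing to fetch from
// (maintaining this list is the painful part mentioned above).
$allowedHosts = ['api.example.com', 'www.example.org'];

function fetchIfAllowed(string $url, array $allowedHosts): ?string
{
    $host = parse_url($url, PHP_URL_HOST);
    // Refuse anything whose host is not explicitly whitelisted.
    if (!is_string($host) || !in_array(strtolower($host), $allowedHosts, true)) {
        return null;
    }

    $ch = curl_init($url);
    curl_setopt_array($ch, [
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_FOLLOWLOCATION => false, // a redirect could escape the whitelist
        CURLOPT_TIMEOUT        => 10,
    ]);
    $body = curl_exec($ch);
    curl_close($ch);

    return $body === false ? null : $body;
}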
That is how I think about this security question. I do not claim that this is correct, but it makes much more sense to me, because there is nothing built into cURL that identifies and filters 'bad/unauthorized' requests from the 'good' ones.
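To be precise, the closest thing cURL offers is restricting which protocols it will speak. That is not real filtering of 'bad' requests, but it is still worth setting. A minimal sketch, assuming PHP's cURL extension and a $url value coming from elsewhere:

<?php
$ch = curl_init($url);
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER  => true,
    // cURL does not judge whether a request is 'good' or 'bad', but it can
    // be told to refuse anything that is not plain HTTP(S), e.g. file://
    // or ftp:// targets smuggled in via user input or redirects.
    CURLOPT_PROTOCOLS       => CURLPROTO_HTTP | CURLPROTO_HTTPS,
    CURLOPT_REDIR_PROTOCOLS => CURLPROTO_HTTP | CURLPROTO_HTTPS,
    CURLOPT_MAXREDIRS       => 3,
]);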
Regards