I think this is a bit strong. Three Google executives were charged in Italy over content on their site, even though that content was removed within 24 hours of notification. This raises the question of how third-party providers can be expected to take responsibility for content uploaded by end-users. I believe that as long as the provider acts responsibly by removing content as soon as it is notified, that is the best it can do short of creating some manual approval process. The content in question was the bullying of a disabled boy, which is appalling, but how do you identify that as inappropriate using some sort of filtering technology? It just isn't possible; it is not like pornographic material. I suppose filtering might work if bad language is used?