This notion of “the creep factor” should be central to the future of privacy regulation. When companies use our data for our benefit, we know it and we are grateful for it. We happily give up our location data to Google so they can give us directions, or to Yelp or Foursquare so they can help us find the best place to eat nearby. We don’t even mind when they keep that data if it helps them make better recommendations in the future. Sure, Google, I’d love it if you could do a better job predicting how long it will take me to get to work at rush hour. And yes, I don’t mind that you are using my search and browsing habits to give me better search results. In fact, I’d complain if someone took away that data and I suddenly found that my search results weren’t as good as they used to be.
But we also know when companies use our data against us, or sell it on to people who do not have our best interests in mind. [...]
These people are privacy bullies, who take advantage of a power imbalance to peer into details of our private lives that have no bearing on the services from which that data was originally collected. Government regulation of privacy should focus on the privacy bullies, not on the routine possession and use of data to serve customers.
Hmmm, should think about this more, but this line of reasoning feels very naive. How does it handle power imbalances that can result from a company having all this data — ones that may not feel "creepy" to direct customers but could have ripple effects elsewhere? Or is he just saying that the creep factor should be one tool among others?