<strong>The Responsibility Mindset</strong> with Artificial Intelligence
The key to getting the problem of data-farming by global companies right is to make people understand what is happening right now. This is not just about Facebook and Google: we are right in the middle of a whole generation that is unable to cope with the topic of privacy.
In a world of ever-faster digitalisation and artificial intelligence, users can no longer grasp the impact on their privacy. Michael P. Lynch puts it well in this interview, which is worth reading:
“Lots of people don't grasp the core reasons why privacy matters. Perhaps this isn't surprising. It's fundamentally a philosophical problem.” — Michael P. Lynch
Currently, we face two main issues:
The first concerns developers, for whom the easiest path is still to build a private, locked-down, cloud-based solution. That is because it is very hard to do both: build a good product with all its complicated features, and run good, secure, maintainable infrastructure. There is currently no real way to build distributed infrastructure for Artificial Intelligence: you need a lot of computing power and a secure environment, while you as a software provider still need feedback and issue reporting to improve the solution.
Look at cars from Tesla, BMW, Mercedes or other luxury manufacturers with security and convenience features: they offer great driving assistants that “learn” how to prevent accidents. That is great, but they only learn because they send data to the cloud, so that engineers and the system itself can learn from previous incidents and improve. I personally would not know how to do this in a solution locked down to the owner. And how do you ship software updates in a decentralised system? That is not necessarily any safer than a centralised one. The only thing a company can do here is provide a solid privacy policy and assure users that sensitive personal data is not stored on the same system as the driving data. But users still have to fully trust the company that this is true, and that it will not change in the future.
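The separation described above, keeping owner-identifying data away from the telemetry that engineers need, can be sketched in a few lines. This is a minimal, hypothetical illustration: the field names and the idea of a simple client-side scrubber are my assumptions, not any manufacturer's actual pipeline.

```python
# Hypothetical sketch: strip owner-identifying fields from a vehicle
# telemetry record before it leaves the car, keeping only what the
# engineers need to improve the driving-assistant models.
# All field names are illustrative assumptions.

def scrub_telemetry(record: dict) -> dict:
    """Return a copy of the record without owner-identifying fields."""
    PERSONAL_FIELDS = {"owner_name", "vin", "gps_trace", "home_address"}
    return {k: v for k, v in record.items() if k not in PERSONAL_FIELDS}

record = {
    "owner_name": "Jane Doe",                              # identifies the owner
    "vin": "WBA1234567890",                                # identifies the car
    "gps_trace": [(52.52, 13.40), (52.53, 13.41)],         # identifies movements
    "brake_event": {"speed_kmh": 87, "deceleration_ms2": 6.1},
    "assistant_intervention": "lane_keep",
}

upload = scrub_telemetry(record)
# `upload` now contains only the brake event and the intervention type.
```

Even this toy version shows the tension the paragraph describes: the scrubbing happens in code the company controls, so the user still has to trust that it runs as promised.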
For users, the problem is even tougher than for developers. Most users do not even understand why privacy matters.
“To be honest, I’m tending towards saying that most people shouldn’t need to understand privacy. I’d love to have this as a ‘manifested’ standard in every government.” — Anselm
Of course, people want to profit from the tech boom that is still going on, one that is not only promising but actually producing convenient solutions. Most home-automation products on the market genuinely support people and make their lives easier. A smart thermostat is great because it saves energy while taking the thinking-about-it load off people. A device in your kitchen that lets you order things is convenient because the next time you are in the supermarket doing your groceries, you will not have forgotten about the empty soy sauce bottle: you told your smart device you needed it the moment you noticed, and it added it to your digital shopping cart. A messenger that connects you to people you hardly know is a great thing, because human beings are social and happier when we talk to other people; I have made lots of friends through Facebook groups, for example, with people who share my interests.
Why should I, as a user, be in charge of finding out what private data a company stores about me? Does it even matter, when we read in the newspapers that metadata is the most valuable asset, and that even ‘true end-to-end encrypted services’ still produce and send metadata that can be used to track me or build a profile?
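The metadata point above can be made concrete with a toy example. The message structure below is a deliberate simplification of my own, not any real messaging protocol, and the "encryption" is only a stand-in to show that the server cannot read the body while everything around it stays visible.

```python
# Toy illustration: even when the message body is end-to-end encrypted,
# the envelope the server relays still carries metadata.
# The structure and the stand-in "encryption" are simplifications.
import hashlib

def encrypt_body(plaintext: str, key: bytes) -> bytes:
    # Stand-in for real encryption: an opaque keyed digest, just to
    # make the point that the server cannot read the content.
    return hashlib.sha256(key + plaintext.encode()).digest()

message = {
    "sender": "alice@example.org",    # visible to the server
    "recipient": "bob@example.org",   # visible to the server
    "timestamp": 1700000000,          # visible to the server
    "body": encrypt_body("meet at 8?", b"shared-secret"),  # opaque
}

# Who talks to whom, and when: a profile can be built from this
# alone, without ever decrypting a single message body.
metadata = {k: v for k, v in message.items() if k != "body"}
```

This is why "we only see metadata" is a weaker reassurance than it sounds: the social graph and the timing of conversations are themselves revealing.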
“A.I. assistants can give you the news, order you a pizza, and tell you a joke. All you have to do is trust them—completely.” — Will Oremus
A.I. assistants, for the reasons I stated above regarding developers, are convenient. And all you have to do is trust them. Completely.