Smart-home devices: 3rd-party privacy risks

Rewind to 1996, when I landed a job at CERN in Geneva and started a phase of my life which changed me forever. One of the exceptional engineers I met (Ivan) had configured his home into a primitive version of the ‘smart home’, although it wasn’t called that then.

Everything was connected to a dashboard. He knew every time someone entered or left the house, and every time someone visited the bathroom and for how long. He had video connected which he could access from his PC. I think he had also programmed other functional aspects of the house, such as lighting, although I am not sure. What I do remember is how my work colleagues and I, although impressed by his home, were sceptical of the privacy implications. My female colleagues and I could not imagine living in a house where our partner knew every time a bathroom visit happened and for how long.

How short-sighted we were. I am now, really for the first time, taking a dive into smart technologies in the home. I haven’t even started yet and am already challenged to identify the controller-processor or controller-controller accountability. What data is shared with Google or Amazon, and who is accountable? I’ve looked around to see what other smart-home product vendors are writing in their privacy notices, and I have found nothing yet. The page from Google describes how their own products work pertaining to privacy, but still nothing on what happens with the 3rd-party cameras now populating the market. This blog post is me brainstorming with myself.

Looking at the components of a smart device, using a thermostat as our example: (1) the thermostat, (2) Google Assistant/Alexa, and (3) the app/code for the device in the Google Assistant/Alexa dashboard. So what is shared, and where does it go?

(1) The thermostat will have its own memory chip, enough to store and send data onward. In the old days data would be stored in a temporary cache, but nowadays devices are never switched off, and the temporary cache is normally backed by permanent memory on a hardware chip. The risk is if you sell the device and there is no hardware/factory reset button to clean the chip. This is not a high privacy risk, as thermostat data is not highly sensitive, unless the temperature is set unusually high, which can be the case when someone is sick or a new baby has arrived in the household. It could be quite an issue if the smart device is a camera, as in the incident with the Google Nest Indoor Cam.

(2) What is shared with Google Assistant/Alexa? The most publicity has been around the voice data collected and how it has been used. The most talked-about privacy-invasive issues I’ve come across so far are (i) background noise being collected continuously, and (ii) voice commands collected for the purpose of triggering some action, e.g. switching lights on, also being used by Google/Amazon to improve their voice-recognition services without informing users, i.e. a lack of transparency in data collection and use practices.

(3) The app itself may collect other data in order to deliver the service, e.g. GPS/location data which is sent to the provider of the app. The question is whether this flows via Google/Alexa? I guess so, as the device manufacturer is not creating their own app; they have created a piece of code to pop into the Google/Alexa hub/dashboard.

What I see is that with a smart device connected to Google/Alexa, the user is sharing their voice data with Google/Alexa. This is voice data rather than biometric data (it only becomes biometric data in GDPR terms if it is processed to uniquely identify the speaker), and it is not shared with the provider of the device. It is Google/Alexa which translates the voice data into a digital instruction which the device can understand and act upon.

This means that (1) Google/Alexa needs to authenticate to the device app/code, which should grant just enough authorisation to send the instruction and nothing more, (2) Google/Alexa shares instructions (derived from the voice data) with the smart device, and (3) if (1) is done correctly, no data is sent from a 3rd-party smart device back to Google/Alexa.

I may be missing something here, but IMHO the risk for the provider of the smart device is to ensure that the code created to pop into the Google/Alexa hub/dashboard does nothing more than authenticate (2-way) and accept a one-way instruction (from the hub to the device) on what the device must do.
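
To make that concrete, here is a minimal sketch of what I imagine the device-side code could look like: verify the hub, accept only a small whitelist of instructions, and return nothing beyond an acknowledgement. Every name here (handleInstruction, hubIsAuthenticated, SET_TEMPERATURE) is hypothetical; this is my own illustration, not Google’s or Amazon’s actual smart-home API.

```typescript
// Minimal device-side instruction handler (hypothetical, not a real vendor API).
// The idea: authenticate the hub, accept only whitelisted instructions, and
// return nothing except an acknowledgement, so no data flows back to the hub.

type Instruction = { command: "SET_TEMPERATURE"; celsius: number };

const ALLOWED_COMMANDS = new Set(["SET_TEMPERATURE"]);

// Placeholder for verifying the hub's signed token; true 2-way authentication
// would also have the hub verify the device's certificate.
function hubIsAuthenticated(token: string): boolean {
  return token.length > 0; // stand-in for real signature verification
}

function setThermostat(celsius: number): void {
  console.log(`Thermostat set to ${celsius}°C`); // act locally on the device
}

function handleInstruction(token: string, instruction: Instruction): { ok: boolean } {
  if (!hubIsAuthenticated(token)) {
    return { ok: false };                 // reject unauthenticated callers
  }
  if (!ALLOWED_COMMANDS.has(instruction.command)) {
    return { ok: false };                 // nothing beyond the whitelist
  }
  setThermostat(instruction.celsius);
  return { ok: true };                    // acknowledge only: one-way instruction
}

console.log(handleInstruction("signed-token", { command: "SET_TEMPERATURE", celsius: 21 }));
```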

Although I guess I’m forgetting here about the contextual data which is sent back to the Google/Alexa hub in order for it to make decisions?

SEK 200k fine for use of facial recognition in a Swedish school

Finally some action in Sweden!

The ruling is in Swedish, but to summarise: the school was using facial recognition on its students. Facial recognition involves biometric data, hence sensitive data (special categories of data in the GDPR). The school used consent as the legal basis, but this was considered unlawful due to the imbalance in the relationship between the controller (the school) and the data subject (a student of 16+ yrs). Basically the students had no choice.

But there is more. The Swedish data protection authority based their decision on the following:

  1. Art 5 – the personal data collected was intrusive, and more was collected than was needed for the purpose
  2. Art 9 – it is forbidden to process special categories of data unless a legal exception applies, and the school did not have one
  3. Art 35–36 – it seems that a DPIA was not done

What does this mean for other schools, or indeed any public or private entity, looking to use intrusive biometrics? Do a data protection impact assessment (DPIA); from there you will be able to get a clear picture of the potential risk of harm to the rights and freedoms of the data subjects.

For me personally and professionally, I’m just happy that China’s big brother approach has been nipped in the bud here in Sweden 🙂

GDPR SAR exploit…. nah

Thanks to Matt Palmer for bringing this article to my attention, and there has been some Twitter activity on this… but I’m not very active on Twitter… maybe I should..

Anyhow, the claim is that the GDPR was exploited to get hold of personal data via the rights of the data subject, except in this case the requests were made by researchers rather than the data subject themselves.

What went wrong here is that some companies did NOT verify the identity of the requester (data subject). This is different to authentication.

Authentication is where you provide credentials in order to be permitted access to an application, system, device, whatever. For example, you probably use your fingerprint to authenticate to your smartphone, but it could be just a username and password. Authentication doesn’t necessarily prove you are who you say you are. Clearly your fingerprint can do this, as it is ‘something you are’, but your username/password combination does not.

ID verification is when you need to provide evidence that you are who you say you are; a strong example, in the context of SARs under the GDPR, is your driving licence or ID card.

https://www.dailyrecord.co.uk/whats-on/theatre-news/award-winning-show-full-monty-7838166

The question is how far you need to go. The GDPR (Art 11) states that the controller should not need to collect additional personal data in order to comply. So this means that if you set up an account as donald.duck@gmail.com 6 months ago and nothing else was shared (e.g. no full name), then what needs verifying is that you are the same donald.duck who created the account. A full SAR Monty is not required.

In Sweden, 4 levels of ID verification have been defined somewhere: the bottom 2 are in line with the donald.duck example, the top 2 require a full ID check.

IMHO companies are making it too difficult for data subjects to exercise their rights. In Sweden some companies are doing a full ID check using something cool called BankID, and this works great: nice and simple, and most people have the app loaded on their telephone!

Many organisations are requesting a copy of ID, a driving licence and even a utility bill, which is fine until you look at the insecure email channels over which this ID verification is being sent…. oops

SARs deadlines

An excellent blog post concerning guidelines from UK ICO on responding to SARs.

In short the important bits are:

  1. You have a single month to respond to the SAR, counted from the date of receipt to the same date the following month; if it is received on the last day of a month, the deadline is the last day of the following month.
  2. And/or a single month from the date of ID verification, where additional information is needed to confirm identity.
  3. If the deadline falls on a non-working day, it can be extended to the next working day.

i.e. it is a SAR request even without the ID verification part. You cannot decide that you effectively have 3 months to respond by combining (1) with a claim that the official SAR process only starts following ID verification under (2). A small sketch of the deadline arithmetic follows.
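
To make the date arithmetic concrete, here is a small sketch of how I read those rules. It is my own illustration of the guidance summarised above, not an official calculator; the function name is made up and public holidays are ignored.

```typescript
// Sketch of the SAR deadline arithmetic: one calendar month from receipt,
// falling back to the last day of the following month when there is no
// corresponding date, then extended past weekends to the next working day.

function addOneCalendarMonth(received: Date): Date {
  const deadline = new Date(received);
  const dayOfMonth = received.getDate();
  deadline.setMonth(received.getMonth() + 1);
  // e.g. 31 Jan + 1 month rolls over into March; pull back to the last day
  // of the following month (28 Feb) instead.
  if (deadline.getDate() !== dayOfMonth) {
    deadline.setDate(0);
  }
  return deadline;
}

function sarDeadline(received: Date): Date {
  const deadline = addOneCalendarMonth(received);
  // If the deadline lands on a Saturday (6) or Sunday (0), extend it to the
  // next working day (public holidays are ignored in this sketch).
  while (deadline.getDay() === 0 || deadline.getDay() === 6) {
    deadline.setDate(deadline.getDate() + 1);
  }
  return deadline;
}

// Received 31 January 2019 -> deadline Thursday 28 February 2019.
console.log(sarDeadline(new Date(2019, 0, 31)).toDateString());
```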

There, how difficult is that?

Cookies!

Cookies have always been a topical subject. If you are overweight and eating a cookie, ‘shame on you’, although the blue Cookie Monster basically made cookie eating, in whatever fashion, acceptable, much to our relief. One could liken the way the Cookie Monster eats his cookies to the way cookies are haphazardly thrown onto our digital devices as though there were no rules.

However, there are rules; it is just that they are not well understood, and valuable, non-technical guidance on ‘cookie management’ has largely been missing. The UK ICO has come up with some great, detailed guidance, and has even implemented a super example on its own website of how cookies should be used. I am just wondering what type of coding was required to achieve this, because I know the platform we are using doesn’t support it.
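
My guess at the general pattern is something like the sketch below: store only the consent choice up front, and load non-essential (e.g. analytics) scripts only after an opt-in. The cookie name and analytics URL are made up, and this is not how the ICO site is actually built; it is just the consent-gating idea.

```typescript
// Consent-gated cookies in the browser: nothing non-essential runs until the
// visitor opts in. Strictly necessary cookies do not need consent under PECR.

const CONSENT_COOKIE = "cookie_consent";            // hypothetical name: "accepted" | "rejected"

function getCookie(name: string): string | undefined {
  return document.cookie
    .split("; ")
    .find((c) => c.startsWith(name + "="))
    ?.split("=")[1];
}

function hasConsent(): boolean {
  return getCookie(CONSENT_COOKIE) === "accepted";
}

function loadAnalytics(): void {
  // Non-essential scripts are only injected after opt-in.
  const script = document.createElement("script");
  script.src = "https://example.com/analytics.js";  // placeholder URL
  document.head.appendChild(script);
}

function recordChoice(accepted: boolean): void {
  // Only the consent choice itself is stored before opt-in (6-month max-age).
  document.cookie = `${CONSENT_COOKIE}=${accepted ? "accepted" : "rejected"}; path=/; max-age=15552000`;
  if (accepted) {
    loadAnalytics();
  }
}

// On page load: respect a choice the visitor has already made.
if (hasConsent()) {
  loadAnalytics();
}
```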

This brings me to another subject: privacy by design. When I provide advice I often get asked, “but is this database GDPR compliant?”. I almost laugh, because we are in an in-between phase right now. Short of ripping out what we have today and replacing it with ‘state of the art’, which the GDPR states is not necessary, it is only those platforms built using privacy-by-design principles, of which there aren’t many right now, which are pure-bred GDPR compliant. Clearly if your system is running on an operating system from the 1980s and doesn’t support encryption and other security mechanisms, you could be having problems sleeping right now 😉

In the main, the technology can be made good enough if certain mitigations are implemented; these are normally not technical, but more to do with locking down processes and educating employees.

This is not a PbD approach, it is bolted-on privacy; things can still go wrong if we need to depend on humans following processes. So, until embedded privacy is the default in all technology, I guess a beautiful privacy notice like the one on the UK ICO website will have to wait for Privasee, until I can get a technical guy in to make it happen.

Unless anyone knows how they did it? I’d love to know…

Digital discrimination is a reality where cash is no longer king

As with any form of discrimination, you are deprived of choice, and the right to choice is a human right.

The “cash is king” society is being replaced with digital money. What this means is that a large mass of individuals are marginalised because they don’t have money in the bank, even though they may have money in their wallet.

Ha, you say now, it is only criminals who have something to hide? Well, then that places me in a class of criminal in your mind: I count the pennies in my wallet, hoping that I have got a salary this month, pay the faceless Mr Taxman as much as I can, and take the rest out in cash, so I have enough money to pay for food for my family to survive.

When I first stepped into these new clothes of what feels like a fugitive, I found that in Sweden I couldn’t pay for the bus or buy a cup of coffee. Although I am learning how to make it work, it is complicated. Hence I needed to work out how to survive without money in the bank, even though I had cash in my wallet. It has opened my eyes to a whole new world where cash is not king, where if all you have is cash you are marginalised. Sweden is pretty advanced on the ‘cashless society’ vision.

It got me thinking again about bitcoin, not through my privacy eyes, but through the eyes of a marginalised individual: because ‘untraceable’ is built into its DNA, I guess (as a non-bitcoin guy) it makes an acceptable alternative to money in the bank.

Clearly bitcoin is a preferred currency for criminal networks because of this, and there are efforts to find a way to make bitcoin traceable to combat money laundering, and other shady stuff going on.

My marginalised hat hopes they don’t succeed. I hope that, in a future when the world is completely digitised, it is still possible to survive with no money in your bank account. Today I can still find a coffee shop that takes cash, and purchase metro/bus tickets at main stations, but tomorrow I can’t imagine how it will be for marginalised individuals and their families in a cashless world.

Seeing is believing?

One year on from GDPR enactment and the market has stabilised. The panic of 2018 settled down in 2019, so much so that consulting friends of mine in the UK, Ireland, Denmark and Sweden have commented to me that there is less demand for pure privacy/GDPR consulting as in-house competences have matured. This is how it should be in order to achieve ‘data protection by design and by default’ across every business function.

In Sweden it has become very laid back after the frenzied panic of 2018. No fines yet, although the Datainspektion is promising us some action during the next 12 months. Seeing is believing, I say, and the latest news on Klarna may change this. The Datainspektion needs to make an example of some organisation which is not compliant, and soon, or it will not be taken seriously. The latest news on fines has been in Denmark, against a taxi company. Each supervisory authority is accountable for enforcement of the GDPR in its respective country. If it does not enforce, it has to answer at the EU level… even the authority itself could be penalised; yes, the Datainspektion can be penalised!

What’s cool is how businesses are on the road to understanding that GDPR is not a purely legal problem, it is a whole-business problem, and as such the engagement of privacy champions across every business function is happening, which I find very exciting. In fact, the more employees who get what this is about, the more likely it is that the organisation will succeed without feeling that it inhibits innovation; in fact, quite the opposite!

If you want to get privacy champions in your organisation engaged at the right level, you can’t go far wrong enrolling them in the Privasee EAGLE online training; it costs only €285 and it’s on a gamification platform, so it’s actually fun!! Some of the larger organisations I’ve worked with have 100 champions spread across every business function. IMHO every business with 5+ employees should have at least one.

Sorry for the marketing plug here.. but Privasee needs to start making money on its products, and every training we sell helps us to continue the good work.. right now we are challenged with cashflow during the summer months… I’m a great privacy advocate and innovator, but not interested in money per se. I wish I was; then we wouldn’t have cashflow challenges.

If you love what we do, please either buy or recommend an EAGLE and get your privacy champions engaged in time for autumn. Those who have done the training love it! If you do, please comment or send a message so we can be sure to send a thank-you token directly! If you want to resell Privasee products, we want to hear from you; unfortunately our ‘go-to-market’ sucks, reminding me of Novell, where I worked for 7 years. We need to sell 100 EAGLEs to be flying again; if we sell 200, maybe I’ll be able to treat myself to a haircut and a new pair of jeans 🙂

THANK YOU.