I have never worked in a call-centre, having sat at Level 2-3 IT support in my younger days, but I’ve worked very closely with first-line support and felt the pressure they are under. Their challenge is to process as many calls as possible in the shortest time, yet maintain quality. This means the business has a strong driver to find key performance indicators (KPIs). The main KPI is call length, which is easy to measure; quality is more difficult.
This has led to innovations in the tracking of call-centre employees, correlating the metrics on the calls they take with what the employee is doing on their computer screen. This brings us to the use of surveillance, which is not just cameras but telecommunications too, made easier with voice over IP together with a piece of software that takes screenshots while the call is in progress.
Is this disproportionate? With my privacy hat on, I say YES. Although clearly, recording these actions for a specific purpose, i.e. to train and help improve quality and effectiveness, could be good. This is not only for providing a quality service, but also for the employees who want to be great at what they do!
Some great guidelines have been published by CNIL. They’re in French of course, but the translation is actually rather good.
Rewind to 1996, when I landed a job at CERN in Geneva and started a phase of my life which changed me forever. One of the exceptional engineers I met (Ivan) had configured his home into a primitive version of the ‘smart home’, although it wasn’t called that then.
Everything was connected to a dashboard. He knew every time someone entered or left the house, and every time someone visited the bathroom and for how long. He had video connected which he could access from his PC. I think he had also programmed other functional aspects of the house, such as lighting, although I am not sure. What I do remember is how my work colleagues and I, although impressed by his home, were sceptical of the privacy implications. My female colleagues and I could not imagine living in a house whereby our partner knew every time a bathroom visit happened and for how long.
How short-sighted we were. I am now, really for the first time, taking a dive into smart technologies in the home. I haven’t even started yet and am already challenged to identify the controller-processor or controller-controller accountability. What data is shared with Google or Amazon, and who is accountable? I’ve looked around to see what other smart-home product vendors are writing in their privacy notices, and I have found nothing yet. The page from Google describes how their own products work pertaining to privacy, but there is still nothing on what happens with the 3rd-party cameras now populating the market. This blog post is me brainstorming with myself.
Looking at the components of a smart device, using a thermostat as our example: (1) the thermostat itself, (2) Google Assistant/Alexa, (3) the App/code for the smart device in the Google Assistant/Alexa dashboard. So what is shared, and where does it go?
(1) The thermostat will have its own memory chip, enough to store and send data onward. In the old days data would be stored in a temporary cache, but nowadays devices are never switched off, and the temporary cache is normally backed by permanent memory on a hardware chip. The risk is if you sell the device and there is no hardware/factory reset button to wipe the chip. This is not a high privacy risk, as thermostat data is not highly sensitive, unless the temperature is set unusually high, which tends to be the case when someone is sick or a new baby has arrived in the household. It could be quite an issue if the smart device is a camera, though, as with the incident with the Google Nest Indoor Cam.
(2) What is shared with Google Assistant/Alexa? The most publicity has been around the voice data collected and how it has been used. The most talked-about privacy-invasive issues I’ve come across so far are (i) background noise being collected continuously, and (ii) voice commands collected for the purpose of triggering some action, e.g. switching lights on, also being used by Google/Amazon to improve their voice-recognition services without informing the users, i.e. a lack of transparency in data collection and use practices.
(3) The App itself may collect other data in order to deliver the service, e.g. GPS/location data which is sent to the provider of the App. The question is whether this flows via Google/Alexa? I guess so, as the device manufacturer is not creating their own App; they have created a piece of code to plug into the Google Assistant/Alexa hub/dashboard.
What I see is that with a smart device connected to Google/Alexa, the user is sharing their voice data with Google/Alexa. This is not biometric data, it is voice data. Voice data which is not shared with the provider of the device. It is Google/Alexa which translates the voice data into a digital format which the device can understand and act upon.
This means that (1) Google/Alexa needs to authenticate to the device App/code, which needs to grant just enough authorisation to send the instruction and nothing more, and (2) Google/Alexa shares instructions (derived from the voice data) with the smart device. (3) If (1) is done correctly, no data is sent from a 3rd-party smart device to Google/Alexa.
I’m not sure if I’m missing anything here, but IMHO the risk for the provider of the smart device is to ensure that the code it pops into the Google/Alexa hub/dashboard does nothing more than authenticate (2-way) and receive a one-way instruction (from the hub to the device) on what the device must do.
Although I guess I’m forgetting here about the contextual data which is sent back to the Google/Alexa hub in order to make decisions?
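To make the one-way flow in (1)-(3) above concrete, here is a minimal device-side sketch in Python. Everything in it is an assumption for illustration, not any vendor’s real API: a shared secret provisioned at pairing time, an HMAC-signed JSON payload, and an invented `handle_hub_message` helper. The point is simply that the device authenticates the hub, authorises only a fixed instruction set, and sends back nothing but an acknowledgement.

```python
import hashlib
import hmac
import json

# Hypothetical shared secret provisioned when the device is paired
# with the hub (the pairing flow itself is out of scope here).
SHARED_SECRET = b"paired-device-secret"

# The only instructions the device will ever act on.
ALLOWED_COMMANDS = {"set_temperature", "switch_on", "switch_off"}

def handle_hub_message(payload: bytes, signature: str) -> str:
    """Device-side handler: (1) authenticate the hub via an HMAC
    over the payload, (2) authorise only the fixed instruction set,
    (3) reply with a bare acknowledgement, so no personal data
    flows back from the device to the hub."""
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature):
        return "rejected: authentication failed"
    command = json.loads(payload)
    if command.get("action") not in ALLOWED_COMMANDS:
        return "rejected: not authorised"
    # ...act on the instruction locally (e.g. adjust the thermostat)...
    return "ok"

# Example: the hub signs and sends a one-way instruction.
msg = json.dumps({"action": "set_temperature", "value": 21}).encode()
sig = hmac.new(SHARED_SECRET, msg, hashlib.sha256).hexdigest()
print(handle_hub_message(msg, sig))
```

The design choice being sketched: if the device-side code only ever returns "ok" or "rejected", then even a chatty hub learns nothing from a 3rd-party device beyond whether its instruction landed.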
More on kids, and Sweden is ahead of the trend, as is normal on children’s rights.
There is a new law (barnkonventionen svensk lag, i.e. the UN Convention on the Rights of the Child as Swedish law) being discussed which looks as though it will take effect in 2020, and which basically means that parents are not permitted to post pictures of their children online without their permission.
This came to my notice following a post I made in a private group on Facebook, pointing out that posting pictures of children, or indeed of any individual, without their permission goes against human rights and the right to a private life. I made this post because I was horrified (although not surprised) to find that someone had posted a video of a couple of teenagers on mopeds on the island (where I live) driving too fast, and was asking who they were. The culprits were uncovered. In the main she was praised for stopping them, and names were mentioned, until the mother popped up in the thread.
This reminded me of something which happens in China, a practice called ‘cyber manhunt’. An individual does something bad, and a hunt is initiated to find him/her via social networks and other connected means; once found, their life is made a misery.
In this closed group there were almost 1,000 members, so the two teenagers were publicly exposed. They did something wrong, but that doesn’t matter: they didn’t deserve public humiliation. I also wonder: if adults are posting these kinds of videos of kids online, then clearly kids will not hesitate to do the same… and the consequences can be fatal if a child takes their own life over something posted about them to which they have not agreed.
The new law protecting kids in the digital, connected age is therefore a delightful development. How it will work in practice, we will see. From a practical perspective, I’m just wondering how a child under five will be able to consent to their pictures being posted online. But I’m sure there is something in the legal text which covers this…
The thing is that sometimes it is VERY useful to use tracking technologies, for example to protect vulnerable persons, i.e. small children and old people (who tend to wander). So the decision by the Norrköping kindergarten not to allow the tracking of toddlers/small children via armbands was, IMHO, a bad one.
As a parent it would give me peace of mind. Human rights give us a ‘right to feel safe’ and a ‘right to a private life’. These rights can often conflict with each other, which results in the wrong decisions being made. Hence, in fear of breaking the GDPR, a school has turned down a measure which has so many benefits for all. What’s more, RFID/sensors are not biometrics, so this has no relation to the other decision. Sensors do not even need to be linked to an identity. All the school needs to know is whether they have lost a child, not which one… that they can work out pretty quickly by seeing which children they still have.
This highlights another problem: decisions are made by persons who are not able to perform this careful balancing act and really identify the potential risk of harm to the natural person. In the case of the Norrköping school I can see no risks which outweigh the benefits to the ‘right to feel safe’.
Thanks to Inge Frisk for bringing this decision in Norrköping to my attention.
The ruling is in Swedish, but to summarise: the school was using facial recognition on its students. Facial recognition is biometric data, hence sensitive (‘special categories of data’ in the GDPR). The school used consent as the legal basis, but this was considered unlawful due to the imbalance in the relationship between the controller (the school) and the data subject (a student of 16+ yrs). Basically, the student had no choice.
But there is more. The Swedish data protection authority based their decision on the following:
Art 5 – the personal data collected was intrusive, and more was collected than was needed for the purpose
Art 9 – the school did not have a legal exception permitting it to handle sensitive data. It is forbidden to process sensitive data unless such an exception applies.
Art 35-36 – it seems that a DPIA was not done.
What does this mean for other schools, or indeed any public or private entity looking to use intrusive biometrics? Do a data protection impact assessment (DPIA); from here you will be able to get a clear picture of the potential risk of harm to the rights and freedoms of the data subject.
For me personally and professionally, I’m just happy that China’s big brother approach has been nipped in the bud here in Sweden 🙂
Thanks to Matt Palmer for bringing this article to my attention. There has been some Twitter activity on this… but I’m not very active on Twitter… maybe I should be.
Anyhow, the claim is that the GDPR was exploited to get personal data via the rights granted to the data subject, although in this case it was some researchers doing the exercising.
What went wrong here is that some companies did NOT verify the identity of the requester (the data subject). This is different from authentication.
Authentication is where you provide credentials in order to be permitted access to an application, system, device, whatever. For example, you probably use your fingerprint to authenticate to your smartphone, but it could be just a username and password. Authentication doesn’t necessarily prove you are who you say you are. Clearly your fingerprint can do this, as it is ‘something you are’, but your username/password combination does not.
ID verification is when you need to provide evidence that you are who you say you are; a strong example is your driving licence or ID card, in the context of SARs under the GDPR.
The question is how far do you need to go? The GDPR (Art 11) states that the controller should not need to collect additional personal data in order to comply. So this means that if you set up an account as donald.duck@example.org 6 months ago and nothing else was shared, e.g. your full name, then what needs verification is that you are the same donald.duck who created the account. The full SAR Monty is not required.
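The donald.duck idea can be sketched in a few lines of Python. The helper names here are invented, and a real system would store the tokens server-side with an expiry; the point is that the controller can verify the requester controls the same account by sending a one-time token to the address already on file, without collecting any new personal data.

```python
import hashlib
import hmac
import secrets

# Hypothetical in-memory token store; a real system would persist
# tokens server-side with an expiry time.
_pending: dict[str, str] = {}

def start_verification(account_email: str) -> str:
    """Generate a one-time token for the address already on file.
    Only a hash of the token is stored, so a leaked store does not
    leak usable tokens. In a real system the token would be emailed
    to account_email; it is returned here only to keep the sketch
    self-contained."""
    token = secrets.token_urlsafe(16)
    _pending[account_email] = hashlib.sha256(token.encode()).hexdigest()
    return token

def confirm_verification(account_email: str, token: str) -> bool:
    """True only if the presented token matches the one issued for
    this account; the token is consumed either way (one-time use)."""
    stored = _pending.pop(account_email, None)
    digest = hashlib.sha256(token.encode()).hexdigest()
    return stored is not None and hmac.compare_digest(stored, digest)

# Example: the requester proves control of the registered address.
tok = start_verification("donald.duck@example.org")
print(confirm_verification("donald.duck@example.org", tok))
```

Nothing new is collected: the only thing proven is that the requester can read mail sent to the address the account was created with, which is exactly the level of assurance the account itself ever had.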
In Sweden, four levels of ID verification have been defined somewhere. The bottom two are based on the donald.duck example; the top two are based on a full ID check.
IMHO companies are making it too difficult for data subjects to exercise their rights. In Sweden some companies do a full ID check using something cool called BankID, and this works great, nice and simple, and most people have the App loaded on their telephone!
Many organisations are requesting a copy of an ID, driving licence and even a utility bill, which is fine until you look at the insecure email channels over which this ID verification is being sent… oops.
An excellent blog post covers the guidelines from the UK ICO on responding to SARs.
In short the important bits are:
You have a single month to respond to the SAR, counted from the date of receipt to the same date the following month; if it arrives on the last day of a month, the deadline is the last day of the following month.
And/or a single month from the date of ID verification.
If the deadline falls on a non-working day, it can be extended to the next working day.
i.e. it is a SAR request even without the ID verification part. There is no scope for deciding that you can wait 3 months to respond (point 1), and that the official SAR process only starts following ID verification (point 2).
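The deadline rules above can be sketched as a small Python function, using only the standard library. Public holidays are deliberately left out to keep the sketch short; a real implementation would check those too.

```python
import calendar
from datetime import date, timedelta

def sar_deadline(received: date) -> date:
    """Deadline for responding to a SAR: the same date in the
    following month. If the request arrives on the last day of a
    month (or the following month has no corresponding date), the
    deadline is the last day of the following month. A deadline
    landing on a weekend rolls forward to the next working day
    (public holidays are ignored in this sketch)."""
    year, month = received.year, received.month + 1
    if month > 12:
        year, month = year + 1, 1
    days_in_next = calendar.monthrange(year, month)[1]
    days_in_this = calendar.monthrange(received.year, received.month)[1]
    if received.day == days_in_this:
        day = days_in_next            # last day -> last day rule
    else:
        day = min(received.day, days_in_next)
    deadline = date(year, month, day)
    while deadline.weekday() >= 5:    # Saturday=5, Sunday=6
        deadline += timedelta(days=1)
    return deadline

# Example: a SAR received on 31 January 2019 is due by 28 February 2019.
print(sar_deadline(date(2019, 1, 31)))  # -> 2019-02-28
```

Note that the clock here starts at `received`, the date the SAR arrives, matching the point above that it is a SAR even before identity is verified.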