Episode #002
31 min listen

The Legalities of Data Collection

with Peter Craddock
Partner of Keller and Heckman
“Personal data can be tampered with in so many ways, and it could be unintentional. There are already cases that can be used as case studies to know when a situation is a case of data infringement.”
Peter Craddock, LinkedIn
Description

On this week’s episode of Ethical Data, Explained, Peter Craddock, Partner at Keller and Heckman, joins Henry Ng to discuss personal data and the consequences of personal data infringement. They discuss what it’s like working at Keller and Heckman; making people's data safe and secure; the role the DeFine tool plays in calculating fines for personal data infringement; and the implications of the IAB Europe Transparency and Consent Framework case.

Transcript

Henry - 00:00:00: Welcome to Ethical Data, Explained. Join us as we discuss data-related obstacles and opportunities with entrepreneurs, cybersecurity specialists, lawmakers, and even hackers to get a better understanding of how to handle data ethically and legally. Here to keep you informed in this data-saturated world is your host, Henry Ng.

Henry - 00:00:19: Hello everyone, and welcome to Ethical Data, Explained. I'm your host, Henry Ng, and today I'm joined by a partner from Keller and Heckman. His name is Peter Craddock, and I'll pass over to him quickly to give a brief introduction about himself and a little bit about his journey across data protection, cybersecurity, ecommerce, and software contracting. Over to you, Peter.

Peter - 00:00:40: Thank you, Henry. My name is Peter, and I'm actually a software developer who then moved into law. So I started law as someone who didn't want to be a lawyer, and then eventually found my calling, I think. I basically started in a more intellectual property and distribution law kind of practice, and my software-related background really helped me to move into IT law, and then very rapidly after that into data protection law, when questions started to arise there. And I've been doing that ever since. Nowadays data protection is the vast majority of my practice, and I've found the firm in which I can really grow that practice at a European level as well, together with American colleagues and people in Asia. So I think I've managed to develop a nice practice with a breadth of different topics that we touch upon: the contractual work, the advisory work, and litigation as well. I'm pretty proud that we have one of the larger litigation practices in the field. That's really nice, because you get to have discussions with very bright lawyers on every side, and also interactions with the judges themselves and the authorities. And so you can try to, in a way, shape data protection law that way as well.

Henry - 00:02:03: Brilliant. That's great to hear. Surprisingly, you're not our first guest who's gone from software into law, or law into software. I feel like there's a kind of natural affinity between the two in some cases. But what would you say was your big draw towards the data protection and cybersecurity side of law? Was there a catalyst that drove you to that?

Peter - 00:02:22: Well, software for me was a hobby before I started my studies, and so I'd been doing that for some time, building my own websites and things like that. Then gradually I started to build legal technology. When data protection started to grow, at the beginning it was one question every three months, and then gradually it became a whole lot more once the GDPR was adopted. From that moment onwards, it really felt like a very natural fit, because I understand the technologies. That way I could really help clients to figure out not only legal solutions to technical problems, but also technical solutions to legal problems. When you have that dual capacity, or some understanding of the two aspects, it really facilitates your work in discussing with clients as well.

Henry - 00:03:11: Definitely. So you became a specialist in both worlds, and you drew them together. From your background, and from our deep dive into your LinkedIn, we can see how successful you've been on both of those sides. In your opinion, what's been the most high profile data protection and information technology case that you've worked on, what was it about, and what was the result?

Peter - 00:03:33: Well, there's one very high profile one that I'm working on right now, which is the one regarding the IAB Europe Transparency and Consent Framework. This is a case that's now before the Court of Justice of the European Union. I've already had a couple of other cases before the Court of Justice, but this is the one with, I think, the broadest impact, because it has an impact on the whole online advertising world. So that's a really nice high profile case to be involved in, and one where we're obviously trying to influence the way that things move from now on. It's really very exciting to be a part of this case in particular.

Henry - 00:04:14: And what route do you think this case will take? Obviously, I'm not saying you're psychic and can predict the future, but which way do you think the result will go?

Peter - 00:04:23: Well, I'm hopeful that this result will not mean the end of online advertising as we know it, and will instead help the whole online advertising industry evolve in a way that works both for the actors in that field and for the regulators, so that it leads to something that perhaps they understand better, or that evolves closer to their expectations. I'm sure there's lots that can be done, and it doesn't have to be resolved by a negative decision. So that's my hope.

Henry - 00:04:56: Amazing. It sounds like these high profile cases are coming more and more frequently, especially with access to data and with how the Internet is growing as a tool for not only businesses but individual use. Speaking of the Internet: obviously, we do like to do a little bit of snooping on our guests through their LinkedIn, just to see what they're about and their background, and we came across a couple of posts from you. One of them was about a phishing call you received. So it would be great to know a little more about what the phishing call was about, how you were able to determine it was a phishing call, and obviously the security risks behind that.

Peter - 00:05:29: It's not my first phishing call, and sometimes, depending on my mood, I try to play around and play along with them and figure out how far they're willing to go in their explanation. Sometimes I'll just cut the call immediately, but this time I thought I'd play along for a bit. And it was actually fascinating to me, because this was the most convincing phishing call I'd had in a while. They really tried to tell me that I'd made an Amazon purchase for an iPhone and that it was an unusual transaction for me. So I said, yeah, well, I'm sorry, but I'm not interested, thank you. Then they said, yes, but we found it was fraudulent because it was an order placed from London, and I live in Belgium, so obviously this would not make sense. They were trying to elicit a response of fear on my part, a susceptibility to being influenced. But I thought that the way they responded to my questions was a lot smarter than what I'd seen so far; I thought it was noteworthy. Now, data becomes available every now and again through data breaches, and so sometimes you do get your spam emails or your fraud phone calls. But this one seemed like a very big coincidence, because it happened just after a massive leak had been suggested, or reported rather. So I wondered, perhaps it's related to that, perhaps they've started to pick some targets based on that. But I thought it was a really nicely scripted interaction as well.

Henry - 00:07:04: It feels like phishing emails and phishing calls have come a long way from "you have a distant relative in a different country who has an inheritance for you".

Peter - 00:07:11: Yeah, you know, I was playing around with ChatGPT the other day, and I read through their terms of use, and they say you cannot use it to generate spam and so on. But I then tried to generate a letter coming from a Nigerian prince, and the result was really nice. So I'm wondering whether that will be the basis of future scam scripts.


Henry - 00:07:28: That could be the future, moving forward from that, really. But obviously we want to protect the listeners from these types of phishing calls. So do you have any advice on how to avoid them, or on what you should do after receiving a phishing call like this?

Peter - 00:07:40: Well, after receiving a phishing call, it's always worthwhile trying to figure out: can I identify where this could have come from? I like to use tools like Have I Been Pwned and so on, so I can figure out where my data might be covered by a breach; that is a fantastic resource from that perspective. But how to identify them in practice? You should always be skeptical. Whenever you get an email, whenever you get a phone call, always try to figure out: does this request make sense in relation to the context in which I'm living, the people I've been in touch with, and so on? You should always have a little voice inside your head saying, this could be a scam. Now, it's not an ideal way to live, and you shouldn't be distrustful of everyone, but you do have to keep an eye out for common sense mistakes, and you always have to be a bit vigilant. Not too vigilant, you don't want to become paranoid. But it's things like, well, does it make sense that someone would call me? If it's, for instance, an Amazon caller, and you have an Amazon account, it's very simple for you to go and look in your order history. So you can always do your own verifications without entrusting someone else with information concerning you.
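The Have I Been Pwned lookup Peter mentions can be sketched in a few lines. The Pwned Passwords range API uses k-anonymity, so only the first five hex characters of the password's SHA-1 hash ever leave your machine; the endpoint and response format below follow HIBP's public documentation, while the helper names are our own:

```python
# Sketch: check a password against Have I Been Pwned's Pwned Passwords
# range API. Only the 5-character hash prefix is sent, never the password.
import hashlib
import urllib.request

RANGE_URL = "https://api.pwnedpasswords.com/range/"  # documented HIBP endpoint

def hash_split(password: str) -> tuple[str, str]:
    """SHA-1 the password and split the hex digest into prefix/suffix."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def count_in_range(body: str, suffix: str) -> int:
    """Parse a 'SUFFIX:COUNT' response body and return the breach count."""
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

def pwned_count(password: str) -> int:
    """How many times the password appears in known breaches (network call)."""
    prefix, suffix = hash_split(password)
    with urllib.request.urlopen(RANGE_URL + prefix) as resp:
        return count_in_range(resp.read().decode("utf-8"), suffix)
```

A non-zero count means the password has appeared in a known breach and should be changed; breached email addresses need the separate, authenticated HIBP account API.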

Henry - 00:08:52: That's a really interesting topic, because what I was going to move on to with my next question were the cases you've expressed interest in before. According to your LinkedIn, they're about defining personal data. Could you go a little more into that case? And if you were presiding over a similar case, would you come to the same type of conclusion?

Peter - 00:09:11: That particular case concerned OLAF, basically an agency of the European Union, and it concerned a specific publication that had been made regarding possible misuse, I believe, of grants or something like that. There had been a publication by OLAF on the website of the European Commission about something that had been going on, and they hadn't identified the individual who was covered by the investigation, but they provided some information. Purely on the basis of that information, you weren't able to immediately identify the person covered by the investigation. But if you wanted to, you could actually go online, do some searches and some cross referencing, comparing websites, and finally you'd be able to identify that person. So there was some degree of effort actually needed to get to the point of being able to identify this person. This was before the General Court, so not the highest part of the Court of Justice itself, but basically its first instance organ, which deals notably with cancellation proceedings against certain administrative acts. Here the judges were asked to examine: is this processing of personal data, and if so, is it unlawful, and so on. And they actually said that the publication itself, which didn't contain the name of the individual, didn't provide sufficient information purely on that basis to enable me as a reader to identify that person. I needed to take additional steps, do some additional searches, and really use my brain to figure out the links between these different things; only then would I be able to identify the person. So the General Court said that's not sufficient in this particular context with this information: the publication itself did not actually contain personal data.
So in this context, there was no processing of personal data in the publication itself. And it is an interesting case, because it is relevant to a number of situations. I do have a few ongoing cases where we are addressing the issue of what is personal data. You'd think that, you know, data protection legislation has been around for basically 30 years now, and that some of these basic concepts would have been sufficiently examined, but that's not yet the case. We still do not have sufficient clarity to mean that all authorities and all controllers and processors interpret it in the same way. So we still have a tension: some authorities interpret it in a very broad manner, some in a more restrictive manner. It's a fascinating topic, I think, because there are lots of situations. Think about cookie identifiers, think about the information in your shopping cart when you are online, think about an IP address. There's been a case about an IP address, but people still assume that because of that case, IP addresses are automatically personal data, which is not what the court said in that case. So there are lots of situations where we are using information that actually is not yet personal data. I like to use the concept of potential personal data: something that's potentially personal data, because I might be able to identify someone on that basis if I get additional data and I know what kind of additional data I need. But there are lots of situations where I just don't know. And so then there's the question: should the GDPR, and data protection legislation in a similar context worldwide, apply? There are lots of cases where the answer actually should be no. If, in this particular context, I have no way of knowing, and no way of asking someone who might know and getting that information from them, then I shouldn't be treating it as personal data.
So there are lots of interesting considerations related to this particular case, and ongoing cases that have to deal with that concept.


Henry - 00:13:13: Definitely sounds like a bit of a maze when it comes to trying to define personal data. You obviously raised the point of the GDPR, and Keller and Heckman recently presented their DeFine tool. I'm not going to pretend that I know everything about fines within the GDPR, so it would be best to get an idea of what the DeFine tool does, and why you came out with the DeFine tool in the first place.

Peter - 00:13:35: So I've been building tools to help myself and help my clients for some time. And I love it if a client says, I've got an issue, can you help me? If I find out that it's something they can do on their own if I give them a tool, if it's a more rudimentary question, or a specific process where my added value is more limited, then my added value is building the tool for them. Then they've got something; you empower the clients, and they can do the calculation or the assessment on their own. That, I think, is a great way to help them. And with fines, there is a lot of uncertainty: how will the authority in my country potentially fine me if I commit this infringement? Whenever a company gets a request for information from the supervisory authority about what's happening with their data, one of the first questions they have in mind is: could this lead to a fine, and if so, how much? Because that has a huge impact on how they're going to deal with that particular request. Also, whenever you have a new initiative, when you're thinking about something that could be a bit risky, it's always important to do a cost benefit analysis. Without clear guidance about what the fines could be, it's sometimes difficult to make that assessment and to say, I'm going to go for this, or I'm not going to go for this because the risk is too great. So fines, even though they're the negative side of data protection rules, and in particular the GDPR, are still something that's really important for organizations to take into account. Now, in May, the European Data Protection Board came up with a proposal of a methodology for the calculation of GDPR fines and submitted it for public consultation. Some clients came to me and said, well, we'd like to submit comments, can you help us analyze it from a critical perspective? And in that context I thought, this is a great question.
How are fines calculated, and how will they be calculated in the future? I basically started by building an Excel file to do that, and then I decided I'd build a web application. So it's basically a calculator that implements this proposed methodology. Now, the proposed methodology is not yet final. When the European Data Protection Board integrates all of the comments it received and decides what it's going to do with this methodology, whether it's going to adapt it or not, we'll have a final version, and then obviously we're going to adapt our tool. But in the meantime, this is a calculator, obviously free to use. Organizations can go there and try to figure out: if I could be considered to infringe this or that provision of the GDPR, what would be the consequences, what are the scales of the fines? So it helps with a bit of predictability or foreseeability. But it's not the whole picture, because right now authorities do not apply this methodology; they have their own practices. So I also did a statistical analysis of 300 decisions to figure out how authorities actually go about fining, and you see massive discrepancies. Some of the authorities are already pretty much in line with the EDPB methodology, and some are always fining on the very low end of the scale. In a way, that kind of incites a bit of forum shopping, but I think it really helps organizations to have this clarity, to figure out what the practices and trends are in different jurisdictions, how they calculate fines, and what the fine could be in the future if the methodology remains unchanged. So DeFine is there for that.
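To give a flavor of the kind of calculation a tool like DeFine automates: the GDPR itself only sets statutory ceilings, in Articles 83(4) and 83(5), of a fixed amount or a percentage of worldwide annual turnover, whichever is higher. The sketch below computes only that legal ceiling; the EDPB's proposed methodology layers many more factors (seriousness, turnover brackets, aggravating circumstances) on top, and this is not how DeFine itself is implemented:

```python
# Minimal sketch of the statutory maximum fine under Art. 83(4)/(5) GDPR:
# the higher of a fixed cap or a percentage of worldwide annual turnover.

def gdpr_max_fine(annual_turnover_eur: float, severe: bool) -> float:
    """Upper bound of a GDPR fine in euros.

    severe=True  -> Art. 83(5) infringements: EUR 20M or 4% of turnover.
    severe=False -> Art. 83(4) infringements: EUR 10M or 2% of turnover.
    """
    fixed_cap, pct = (20_000_000.0, 0.04) if severe else (10_000_000.0, 0.02)
    return max(fixed_cap, pct * annual_turnover_eur)

# A company with EUR 1bn turnover faces a ceiling of EUR 40M for a
# severe infringement, because 4% of turnover exceeds the fixed cap:
# gdpr_max_fine(1_000_000_000, severe=True) -> 40000000.0
```

For small companies the fixed cap dominates; for large groups the turnover percentage does, which is why turnover figures matter so much in enforcement.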

Henry - 00:17:12: So for those listeners who are trying to work out those potential GDPR fines, do go check out the DeFine tool, that's D E F I N E, by Keller and Heckman. Now, before we get to the point of looking at fines and receiving fines, we obviously want to try and catch any breaches first. There's been an IBM security report recently about the utilization of AI and automation programs, and how organizations using them can identify and contain breaches 28 days faster than those without. So what's your take on this? What are the most common cybersecurity threats that businesses face, and how do you see AI integrating with that type of protection and identification?

Peter - 00:17:56: There are two great categories of threats: the internal ones and the external ones. And I still think that internal threats remain a very important vector that has to be addressed internally by organizations. In that context, AI can be very useful in monitoring. Now, you have to be very careful when you're monitoring employees; there are specific rules that you have to take into account, not just data protection rules, but also specific labor rules that apply in certain countries and so on. But you can do some monitoring of what is going on on your network internally, and of what the suspicious activities are. You can figure out patterns, and once you've identified a pattern, you're able to spot, before that pattern repeats itself, what is going in the same direction. This kind of analysis is obviously facilitated by AI tools, because in a way they're more efficient in certain ways, and certainly a bit more creative in identifying certain kinds of patterns. So these tools can be very helpful from that perspective. But it is a form of monitoring, and there are specific rules that apply. You also have the whole issue of the level of risk from a data protection perspective, which has an influence on what documentation you need. So AI tools for that can be fantastic, but there are specific risks involved, and there is additional documentation that you might need: the data protection impact assessment, possible issues regarding transfers. What happens with the data that's being fed into this AI tool? Is it then being used to improve the tool for other customers of the AI tool provider? Because then you have other considerations that apply. Who is actually the controller of this particular element?
So there are lots of data protection questions that are fun for lawyers like me, but you do have to approach this from that perspective as well. Then you have the external threats, where again there's a lot that can be done in terms of pattern detection. The advantage of AI tools from an external threat protection perspective is that if you have a provider who sees attacks on different kinds of systems for different customers, then they're also able to identify more rapidly, and share information more rapidly, about new threat actors, new approaches, and new attack vectors being exploited. AI can help here as well, because again it's about scale, the speed of distribution of information, and how quickly a system is able to react, because then potentially it can block an attack before it happens. That leads to other questions, but basically, while there are a lot of legal concerns attached to these tools, their effectiveness is actually pretty good, because you have additional ways of dealing with threats compared to what we're already doing. So I think they're a fantastic tool to add to your arsenal. I would definitely not replace everything with an AI tool at this stage, but you do have a great additional feather in your cap that you can use to defend yourself. It's a great way of having a more holistic, broader approach to cybersecurity that makes perfect sense.

Henry - 00:21:12: Like any technology, it sounds like it could be a double edged sword: there are definitely all the benefits that come with it, but, as you said, there are the risks as well. Do you have any recommendations for businesses that might be limited in terms of AI resources? Would you recommend something else they could put in place instead of utilizing AI?

Peter - 00:21:31: Well, one aspect of the lack of resources that I've noticed over the years is that a lot of organizations are thinking purely about themselves; they're thinking about themselves as an island, and not necessarily willing to enter into partnerships. Sometimes you can have partnerships with organizations that are not your competitors, and together you can invest in a platform that makes sense for all of you. So you can have a bit of pooling of resources in these situations, and that really helps. Sometimes it's a matter of finding people within the same industry who are faced with a similar issue and trying to figure out: can we build something together, or can we invest together in a platform that will be beneficial to us all? That's one perspective that I think is a bit underutilized. Then you have other avenues, like basically doubling down on your own internal processes, because some of them are just not yet there. We're talking about processes for internal and external security, about information security, but also physical security. How easy is it for someone to just follow someone through a door and then get access to the computers left unlocked at someone's desk? There are lots of other processes that can be improved before you really reach the need for the additional level of protection of the AI tool itself.

Henry - 00:22:55: So that's the main bulk of our questions on the technical side. The final three questions that we ask all of our guests are really just to learn more about you. The first question we have is: who in the world of data would you most like to take out for lunch?

Peter - 00:23:08: There's one person I interact with very regularly, and I'm sure we will have lunch very soon: Romain Robert, who's the number two, or one of the people directly involved, at NOYB, together with Max Schrems. Romain and I have interacted a lot over the years through LinkedIn and other discussions. We're on opposite sides very often, because he and his organization are more privacy activists, and my clients tend to be more data activists, so we're not always on the same side, but very often we learn from each other. Other than that, there are a few other people in supervisory authorities that I'd definitely like to take out for lunch one day.

Henry - 00:23:48: So Romain, if you are by chance listening to this podcast, Peter would like to take you out for lunch, hopefully. Our second question is: what piece of software could you not live without in your day to day life?

Peter - 00:24:00: Well, thinking about it, it would basically be web browsers, simply because I have a lot of interaction with clients, sometimes through social media channels, but in general through a lot of tools that are built on web browsers or integrated in them. Second would have to be Microsoft Word, simply because that is the main work tool that I use. But Excel has always been a very close third for me.

Henry - 00:24:28: I mean, with DeFine having started out in Excel, I can see why it would be an important tool for you. So our final question: when have you used data to solve a real world problem that you have had? It doesn't have to be professional; it could be personal.

Peter - 00:24:41: I had a case where some of my private data had actually been misused in a very fun way. It was a case of impersonation. I got called by someone from some random club in the Netherlands, who basically said, ‘Hi, are you Peter Craddock?’, and I said, ‘Yes, what is this about?’. Apparently there was a guy who'd been using my name and trying to defraud this club somewhere in the Netherlands, I think a darts club or something like that. So I was actually able to use my data to resolve a real world issue, which was that I was being accused of defrauding someone I had never met. It was a very particular situation where I was able to use my data and some evidence about what I was doing and where I usually was, plus the fact that I'd never been to that particular part of the Netherlands. I often traveled to the Netherlands for business, but never to that particular part. This was all very useful information to help sort things out and then get in touch with the local police and so on. That was a bit of an awkward situation to be faced with.

Henry - 00:25:49: Don't worry, listeners, we did double check: the LinkedIn photo matched the Peter who joined the call, so we definitely had the right Peter Craddock. But that is all we have time for today on Ethical Data, Explained. Firstly, I want to thank Peter for joining us, and thank you for your insight. Thank you to the listeners as well. And Peter, hopefully we can get you back on for another podcast in a couple of months' time.

Peter - 00:26:11: My pleasure. Thank you for having me.

Henry - 00:26:13: Thank you very much. Have a good day, Peter. Speak to you soon.

Henry - 00:26:16: Ethical Data, Explained is brought to you by SOAX, a reputable provider of premium residential and mobile proxies, the gateway to data worldwide at scale. Make sure to search for Ethical Data, Explained in Apple Podcasts, Spotify, Google Podcasts, or anywhere else podcasts are found, and hit subscribe so you never miss an episode. On behalf of the team here at SOAX, thanks for listening.

Peter has extensive global practice in privacy, data protection, cybersecurity, e-commerce, and software contracting. He counsels his clients on developing new initiatives to comply with shifting data protection and cybersecurity requirements. Peter is currently a Partner and the head of the EU Data/Cyber/Tech Law team at Keller and Heckman.