As big data products and identification systems proliferate, debate intensifies. Establishing consensus on substantive privacy principles becomes increasingly important. But perhaps more important still, design teams will no longer be able to ignore the need to incorporate privacy and other ethical questions into their product designs. This issue goes beyond theory and discussion; it has real-world implications.
Numerous instances of discriminatory and biased algorithms have come to popular attention, highlighting the risk and prevalence of subconscious and implicit biases embedded in modern technology.
In the wake of GDPR, nearly every company updated their privacy and data policies, and even their designs — acknowledging the role of the consumer in their products, and the messiness and complications of big data.
India is building the world’s largest digital biometric identification system: Aadhaar. By scanning fingerprints, recording irises, and linking demographic information, Aadhaar connects 1.3 billion residents to all public and many private services — wifi, healthcare, train tickets, schools, and so on. However, 30 cases have been brought against the program in the national Supreme Court, and 210 government websites have leaked personal data.
A program intended to showcase India’s technological progress and innovation has instead exposed individual vulnerability and raised concerns about data security and privacy. It is therefore essential that, when designing and building new products, we include those who lack access or have been forgotten in an ever-advancing, digitized world.
In China, facial recognition cameras have already become part of daily life. In conjunction with AI algorithms, facial scanners are being used for everything from recommending a personalized KFC order to police tracking of persons of interest. The massive data set, along with its many associated algorithms, is maintained by the government, which is working closely with the tech industry and aims to make the video surveillance network “omnipresent, fully networked, always working and fully controllable,” according to official documents. Both the advanced technology systems and the data itself are already highly centralized and tightly controlled.
The digital products and systems we’re designing and building today are impacting, and will continue to impact, people around the world. Where the World Bank views digital identification as a “game changer and a force-multiplier in the global push toward poverty alleviation, access to finance and shared prosperity,” critics note that those most in need fall through the cracks. They argue that such systems in China, India, the United States, and elsewhere are steps toward a surveillance state — one that tends to target the poor. Technology is not yet foolproof, and much of the onus of ensuring reliability and protection falls on those building and enabling these systems. The burden is then on designers to embed ethics into design specifications and to ensure individuals are protected if and when governments and companies fail to do so.
Design Ethics at TribalScale
As a designer at TribalScale Venture Studios, I have the opportunity to shape products from the ground up. My impact on a product starts at ideation, which means I can think early about the implications my designs will have on users. I am constantly asking myself how the product will shape a certain system, or whether it will provide an alternative (and better) way of doing things. In the past, I’ve worked on many products that filled an important gap or provided a social good, but I’ve also faced projects that went against my moral beliefs and challenged my role as a designer. In this series, I will talk to some of the designers on my team about how they have handled such instances and worked on projects they do not morally agree with.
Most of the projects I work on, being digital products, take in and utilize large amounts of data. The ethics surrounding these products have a huge impact on users, and as a design strategist, I find this concern more and more pressing. I am a firm believer that, because even the most automated algorithms are written by humans, some degree of bias and ignorance is inevitably written into the product — and we’ve all seen how researchers are grappling with instances of bias in facial recognition systems. Organizations have also begun to tackle bias in machine learning algorithms. For example, Accenture recently unveiled a tool that helps companies ensure that their AI software isn’t racially, ethnically, or gender biased. At TribalScale, our design team compiled information and best practices to help our clients think through their products in the context of GDPR.
Questions of design and technology ethics are increasingly pressing, and it’s important that we discuss these issues and figure out how best to handle projects that conflict with our moral beliefs. This 3-part series will address some of them.
Betty is a design strategist at TribalScale Venture Studios working in trend forecasting, product and service strategy, and research. As an interaction designer and mixed-media artist, she creates immersive experiences and installations in digital and physical spaces. She is currently exploring gestural interactions, sound installations, and wearable technology.