- Avoiding digital dystopia requires caring about the ethics of technology
- Businesses shouldn’t merely think technology, but ethical technology
- Will building ethical tech regain public trust?
- Companies of the future need to be ethically and socially responsible
Automation and robotics have been major game changers for decades. Cutting-edge technologies are disrupting industries and providing tools that help organisations become more agile, smarter, and technologically advanced, allowing them to better prepare for the future. However, little attention is paid to ethical principles in how technologies are developed and used, especially AI-powered systems that are fed with torrents of data. How that data is collected, transmitted, and used to train AI systems all matters, because data analysis produces the actionable information on which decisions are based. And if an AI-powered system is not designed in accordance with ethical frameworks, there’s always a risk that it will make very serious mistakes.
The World Economic Forum states that “technologies have a clear moral dimension – that is to say, a fundamental aspect that relates to values, ethics, and norms. Technologies reflect the interests, behaviours, and desires of their creators, and shape how the people using them can realise their potential, identities, relationships, and goals.” There’s still quite a bit of confusion, however, about what ethical tech means, where ethics belongs in the equation, and why we should care about it now more than ever, and definitions of ethical tech vary. DigitalAgenda, a UK-based clean tech think tank, explains it well: “Ethical tech is, at its heart, a conversation focused on the relationship between technology and human values, the decisions we make toward technological advances, and the impacts they can have.”
Avoiding digital dystopia requires caring about the ethics of technology
Cutting-edge technologies are paving the way for a better future. But while new technologies enable greater efficiency and data-driven decision making, it’s essential that we understand how they operate. Take data collection, for example. Mobile phone carriers and social media sites collect vast amounts of data and can share it with – among many others – government agencies, law offices, and immigration surveillance departments, usually without users even knowing it. If the gathered data is used to track down protestors or reporters, such surveillance can violate basic human rights or the constitutional rights of the country where it occurs.
While there are impressive examples of AI-powered tools benefiting sectors across industries, malicious actors can use the same tools to create deepfakes. These fake videos are especially dangerous when they feature the real faces of influential people saying or doing whatever their creators want – things those people never actually did or said. Educating the public about deepfakes can raise awareness and encourage people to question the accuracy of anything they see in the news.
Another major problem with AI systems is that they are often racially biased. Some of the facial recognition tools police use to identify suspects can be painfully wrong, which can lead to the arrest of the wrong person. According to an MIT study, three commercial gender-recognition systems showed error rates of up to 34 per cent for dark-skinned women – nearly 49 times the rate for white men. Another study found that error rates for African men and women were two orders of magnitude higher than for Eastern Europeans, who showed the lowest rates. A test across an American mugshot database revealed that algorithms had the highest error percentages for Native Americans and very high error rates for Asian and black women. Even Detroit Police Chief James Craig admitted that facial recognition technology is accurate only 3 to 4 per cent of the time: “If we would use the software only to identify subjects, we would not solve the case 95-97 percent of the time. That’s if we relied totally on the software, which would be against our current policy. If we were just to use the technology by itself, to identify someone, I would say 96 percent of the time it would misidentify.” This only means the technology hasn’t hit its prime yet, and that developers need to put significantly more effort into improving its accuracy.
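Findings like these are why researchers argue that accuracy should be reported per demographic group rather than as a single aggregate number. As a minimal sketch of such a disaggregated audit – where the group labels, IDs, and match results below are invented purely for illustration – the check might look like this:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, predicted_id, true_id) tuples."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:  # a misidentification
            errors[group] += 1
    # Report one error rate per group instead of one overall number
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical evaluation records: (demographic group, match returned, ground truth)
results = [
    ("group_a", "id_17", "id_17"),
    ("group_a", "id_02", "id_02"),
    ("group_b", "id_45", "id_81"),  # wrong person returned
    ("group_b", "id_09", "id_09"),
]

for group, rate in error_rates_by_group(results).items():
    print(f"{group}: {rate:.0%} error rate")
```

An aggregate score over all four records would report 25 per cent error and hide the fact that all the mistakes fall on one group – which is exactly the pattern the studies above uncovered.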
Businesses shouldn’t merely think technology, but ethical technology
Advancements in technology allow businesses to reach their goals more easily and far more efficiently than a couple of decades ago. Companies that rely heavily on smart technologies, however, shouldn’t only adhere to government regulations but also set their own limits, since the tech they use to track internet usage, online behaviour, buying habits, and more may compromise the privacy of their employees or customers. We all remember the massive data scandal involving Cambridge Analytica, in which the data of individual US voters was used without their consent to build personalised political ads. Christopher Wylie, who worked with a Cambridge University academic to obtain the data, revealed that they used “Facebook to harvest millions of people’s profiles, and built models to exploit what we knew about them and target their inner demons. That was the basis the entire company was built on.”
Unfortunately, the scandal was just a snapshot of a much bigger problem. Emma Briant, an academic at Bard College, New York, who specialises in investigating propaganda, warns: “The documents reveal a much clearer idea of what actually happened in the 2016 US presidential election, which has a huge bearing on what will happen in 2020… This is an entire global industry that’s out of control but what this does is lay out what was happening with this one company.” In essence, even though governments have passed legislation restricting the use of personal information and giving users some control over their data online, serious data breaches still happen.
Then there are the issues around employee monitoring. An MIT study shows that when employees knew they were being monitored, profits actually increased by 7 per cent, as employees were more efficient and more conscientious about their work. Boni Satani, head of marketing at Zestard Technologies, points out that employee monitoring software has its benefits: “Whether it’s done maliciously or accidentally, employee monitoring software is in a position to alert employers when a user accesses data they’re not supposed to.” However, excessive access to personal information such as health records, bank accounts, or personal correspondence can be dangerous, as such data can be exposed or misused.
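In practice, the alerting Satani describes boils down to checking each access event against an access policy. Here is a minimal sketch, assuming a hypothetical policy table and event format – this reflects the general idea, not any particular vendor’s product:

```python
# Hypothetical policy: which resources each user is authorised to access
ACCESS_POLICY = {
    "alice": {"sales_reports", "crm"},
    "bob": {"crm"},
}

def audit_event(user, resource, notify):
    """Flag access outside a user's policy, whether malicious or accidental."""
    if resource not in ACCESS_POLICY.get(user, set()):
        notify(f"ALERT: {user} accessed '{resource}' outside their policy")

audit_event("bob", "payroll_records", notify=print)
# -> ALERT: bob accessed 'payroll_records' outside their policy
```

The same mechanism that makes this useful for security is what makes scope so important: a policy table that covers health records or personal correspondence turns an audit tool into a surveillance tool.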
Accidental or not, invasion of privacy is a major concern when implementing such monitoring tools, and businesses should carefully weigh the pros and cons. Accenture’s extensive report Decoding Organisational DNA showed that just over 50 per cent of surveyed employees said that using monitoring tools damages trust, and 64 per cent were concerned that employees’ personal data might be exposed, a worry amplified by recent data breaches. It’s encouraging, though, that as many as 92 per cent “of workers are open to the collection of data on them and their work, but only if it improves their performance or well-being or provides other personal benefits”. Given that $3.1 trillion of future revenue growth is at stake for large companies worldwide, there’s no doubt that more attention will be directed towards responsible data strategies that keep employee trust intact.
Will building ethical tech regain public trust?
Unfortunately, a practical ‘how-to’ guide to building ethical tech isn’t available yet, but more and more companies are working on ways to build inherently ethical technologies. To avoid privacy issues in newly developed tech solutions, it’s critical to put far more emphasis on security when designing a product or service. Fortunately, there are tools that help engineers and developers avoid issues around bias and discrimination, or limit the development of addictive tech that harms the human psyche. One such tool is the Ethical OS. It contains a checklist of 8 risk zones, 14 scenarios that can help you gain perspective on the long-term impacts of the product you’re developing, and 7 future-proofing strategies. A growing number of companies have started using this or similar tools to ensure they build ethical tech.
For example, Marc-Loyd Ramniceanu, co-founder at NetCloak, shares that they “are applying the Ethical OS framework to shape not just our strategy and process but our core values as well.” Nicklas Bergman is a venture capitalist with Intergalactic Industries, a Swedish seed-stage investment firm focused on nanotechnologies, genes, and brain-interfacing technologies. He notes that he’s aware that ethical aspects of technology are often overlooked. “We need more time, space and tools for discussing these issues. Enter EthicalOS, a great set of tools for anyone facing the challenge of navigating in a world where technology can have hard to predict consequences.”
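As an illustration of how a team might fold a framework like this into its process, here is a minimal sketch that encodes a few risk zones as a product-review checklist. The zones listed are only a partial, illustrative subset of the toolkit’s eight, and the review logic is entirely hypothetical:

```python
# Illustrative subset of Ethical OS-style risk zones (not the full checklist)
RISK_ZONES = [
    "Machine Ethics & Algorithmic Biases",
    "Surveillance State",
    "Data Control & Monetization",
    "Addiction & the Dopamine Economy",
]

def review_product(answers):
    """answers: dict mapping risk zone -> True once the team has assessed it."""
    unassessed = [zone for zone in RISK_ZONES if not answers.get(zone)]
    if unassessed:
        print("Review incomplete; unassessed risk zones:")
        for zone in unassessed:
            print(f"  - {zone}")
    else:
        print("All listed risk zones assessed.")

review_product({"Surveillance State": True, "Data Control & Monetization": True})
```

The point is not the code itself but the discipline it enforces: a release review that cannot be marked complete until every risk zone has at least been considered.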
Companies of the future need to be ethically and socially responsible
The rise of digitalisation hasn’t just brought a slew of cool, cutting-edge tech we use on a daily basis; it has also made us aware that today’s technology, such as artificial intelligence, is fed with torrents of data, some of it personal and often shared without our knowledge or consent. Thankfully, governments are taking the lead by passing legislation and regulations to protect user privacy, and more and more companies are prioritising their consumers’ demand to have their personal data protected. Ethical and social responsibility in the workplace helps ensure that leaders retain a strong moral compass when struggling through times of crisis. Attention to business ethics brings numerous other benefits as well: companies that make positive choices and decisions also have a positive impact on productivity, employee rights, and consumer trust.