AI and big data: driving enterprise using ethical principles – SciTech Europa


SciTech Europa attended the AI for Good Global Summit, which included discussions of the ethics of AI and big data across the public and private sectors.

We spoke to Maria Axente, AI Programme Driver, PwC United Kingdom in the context of this event about how AI and big data can be used to drive enterprise using ethical principles.

How will AI and big data open up more opportunities for businesses?

AI and big data are making big changes in enterprise. It is worth considering how we got to where we are today: AI as a discipline is around 56 years old, yet it has only recently started to be implemented at a much larger scale. I think there are three main causes of this more recent widespread adoption of AI. The first is the huge volume of data being collected via smart devices; mainly smartphones, but increasingly IoT devices. The second is computing power: increasingly powerful processors allow us to make sense of the data. The third is infrastructure and bandwidth, which give us far more capacity to transfer and work with that data. Together, these have created the basis for AI to make sense of that data.

It is significant that 80 percent of global data sits behind the closed doors of organisations, while 20 percent of the data, which is still an impressive volume, is publicly available. This means only a small proportion of the available data is being utilised. Consequently, there is a huge opportunity for businesses to look at the data lying dormant within their own enterprises, and to consider how best to use it in line with what they are trying to achieve.

Ultimately, while it is easy to define the technology of AI, the definition of data is less clear. There are so many varying definitions of data that there is no consensus on how to work with it. All organisations exist to achieve a mission, and both process and technology must be aligned with what the organisation is trying to achieve. We don't create technology for the sake of having technology; we create it to achieve or increase commercial value and value for social good.

Are there any current challenges in communicating the benefits of Artificial Intelligence to consumers?

The biggest problem we have now is that the meaning of AI is being sensationalised in the media, because it makes a very appealing story. AI technology is sometimes portrayed as though it can give us godlike powers, due to historic human interest in how we can create something that resembles us.

For many people, from entrepreneurs to esteemed researchers, AI is a passion. But many are talking about it in a way that is not really connected with what this technology can do right now. In the UK, the news that AI is going to take people's jobs features frequently. AI will create some jobs, but it will also change how we live our lives dramatically and put a lot of pressure on people who already struggle in their day-to-day lives. We've seen reports on new research commissioned by The Institute for Fiscal Studies on growing economic inequality in the UK and across developed countries. So there is the pressure of daily reality, plus the technology, which adds to this. People are questioning: 'how will this benefit me as a citizen?'

To address this, we must demystify AI. AI in its current state is not as intelligent as people, and is therefore not really 'AI'. We have achieved the basic functions of AI, such as the ability to collect information from the external environment and process it, but this is not done with genuine independence; it is more like a machine following its programming. One example is Sophia, a robot that is heavily programmed to respond to pre-prepared questions, yet adds to the myth that such a system can think independently like a human would.

It is important to educate the public and drive digital upskilling. On one hand, in terms of digital understanding we are far behind; the vast majority of people claim they do not understand how the internet works, so how do we expect them to understand something as complex as machine learning or deep learning? On the other hand, people do not need to understand the technology in depth, but rather how it is embedded in day-to-day life, from accessing finance, to traffic management, to health diagnosis and care. Ultimately, digital understanding is important for digital wellbeing: legislation only offers a certain degree of protection, hence the essential need for personal accountability and awareness of how this new digital world mingles with our offline world and our role as citizens. We are working with the UK Parliament and other groups to produce educational content on AI for everyone, focused on explaining how AI is embedded in day-to-day life.

What are some of the successes PwC has had in implementing AI in their own business strategy?

I’m part of the AI Centre of Excellence in PwC, UK, and we have a dual mission to help our clients to make sense of technology, and also understand how AI is going to disrupt the way we operate as a business. We’re a professional services firm and a lot of what we do is based on the expertise and the skills of our people. We are knowledge workers and how we use knowledge to make better decisions is evolving.

One example of a recent success, which is drawing a lot of internal attention due to its simplicity, is a resource optimisation tool. In professional services, we allocate our staff to thousands of projects with the same clients, but there are other lines of service in which staff need to be engaged. This tool allows us to allocate the best people for the job based on set criteria: not only matching the expertise of the person to the project, but also factoring in considerations such as the distance to travel and the diversity of the team. We are looking to work with our other PwC firms to see how best to create a bespoke solution for them that will allow better allocation of their resources to the projects they have. What started as a basic optimisation solution will now have dynamic machine learning added, so that it learns in real time based on the feedback of people in engagements.
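The article does not describe how the tool works internally, but criteria-based allocation of this kind can be illustrated with a toy sketch. The weights, the distance cap, and the use of "new skills brought to the team" as a diversity proxy below are all hypothetical assumptions for illustration, not PwC's actual method:

```python
from dataclasses import dataclass

@dataclass
class Consultant:
    name: str
    skills: set          # e.g. {"audit", "tax"}
    distance_km: float   # distance from the project site

def score(c: Consultant, required: set, team_skills: set,
          max_distance: float = 500.0) -> float:
    """Combine three criteria into one weighted score:
    skill match, proximity, and the new skills the person
    would add to the team (a crude diversity proxy)."""
    skill_match = len(c.skills & required) / len(required)
    proximity = 1.0 - min(c.distance_km, max_distance) / max_distance
    novelty = len(c.skills - team_skills) / max(len(c.skills), 1)
    return 0.6 * skill_match + 0.25 * proximity + 0.15 * novelty

def allocate(candidates: list, required: set, team_skills: set) -> Consultant:
    """Pick the highest-scoring candidate for the engagement."""
    return max(candidates, key=lambda c: score(c, required, team_skills))
```

For example, given two candidates who both match the required skill, the one closer to the client site scores higher and is allocated. A real system would optimise across all projects simultaneously rather than greedily per engagement, which is where the dynamic machine-learning layer mentioned above would come in.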

What are the future research priorities for PwC’s AI programme?

PwC is in a unique position in that we are not a technology company, but we are tech-enabled. Our mission is to help our clients build better businesses, to build trust, and to support society as a whole. Therefore, we always put technology in the context of the organisation's mission and what it is trying to achieve. We ask two important questions when deciding how to use technology. Firstly, do we really need AI to solve the problem? Using machine learning when a simpler solution would do is a mistake, so we are developing new considerations of when to build and deploy AI: defining the problem and the context, then defining a solution that is feasible and also cost-effective. Secondly, what are the ethical principles of using AI in that context? We are about to launch a responsible AI toolkit, which is, in a nutshell, a five-pillar approach comprising software tools, frameworks, methodologies and labels to help our clients embed ethical principles in their AI solutions.

When I say ethical principles, I go beyond bias or transparency, which are probably the most well-known and well-discussed ethical principles now. There are further considerations, such as how you can keep a human in control of the AI. Our platforms help our clients to contextualise their ethical principles: to determine which ethical principles apply to their business, what AI solution they should have in place, how they can monitor the level of fairness embedded in the AI solution, what level of transparency is required for that solution, and how to ensure the solution is secure and robust.

When you deploy an AI solution in certain contexts, it requires ongoing management rather than simply being deployed and left to deliver benefits by itself. Sometimes the AI requires fine-tuning, so the organisation needs to understand how to monitor it in a sustainable way and be able to correct it if something goes wrong.

Our toolkit to enable this is going to go live at the beginning of summer. Part of our presence at the AI for Good Global Summit is to launch a diagnostic that gives a flavour of what the platform is, and highlights the key elements that clients need to account for in the ethics of their solutions.


Maria Axente

AI Programme Driver and AI for Good Lead, PwC United Kingdom




