Sue Turner - Founder and CEO of AI Governance

Sue Turner OBE, recognised as one of the “100 Brilliant Women in AI Ethics”™, shares her personal journey into the AI and data field, and why she believes expanding her own knowledge will help to shift the power and encourage people to take control of the way AI is used in business and in everyday life.


We were thrilled to catch up with Sue Turner OBE, a specialist in AI, data governance and ethics and recognised as one of the “100 Brilliant Women in AI Ethics”™, for an insightful and engaging interview. She is the founder and CEO of AI Governance, and a non-executive and executive board member.

Sue shares her personal journey into the AI and data field, and why she believes expanding her own knowledge will help to shift the power and encourage people to take control of the way AI is used in business and in everyday life. She gives a valuable perspective into the ethical considerations, capabilities and potential risks associated with AI, plus offers key insights that businesses can learn from adopting their own AI ethics strategies.

We learn why Sue is hopeful that, in the age of AI, humans will continue to have an appetite to learn and be curious, and why she is advocating for diverse voices and experiences in the tech industry so that everyone can have a seat at the table.

Finally, we discover why Sue advises businesses to cultivate and upskill their internal talent, and to ‘grow your own’, rather than joining in the futile battle for talent, so they can be prepared for a tech-driven future...

Hi, Sue. Thanks for agreeing to be interviewed and great to be connected with you. To begin, could you please introduce yourself and your professional engagements in your own words?

My name is Sue Turner and I've spent 50% of my career working for very dynamic, very entrepreneurial private sector businesses and 50% leading not-for-profits. I try to cross-fertilise the best of each to the other; helping the profit-making world to think more about purpose and impact, and helping the third sector and not-for-profits to think more about innovation and to use some of the entrepreneurial tools that can make a huge difference.

I chair one organisation and I'm a non-executive director of another, and I founded and created AI Governance. I'm also a mentor with The Alan Turing Institute, where I help Fellows to think about policy and skills and how to have more impact.

I founded AI Governance around the same time I was completing my Master's degree in AI and Data Science because I recognised I didn't want to sit around coding all day (fun though that is)! Instead, I wanted to figure out how to leverage these tools to help people use AI with wisdom and integrity. Figuring out how to use AI is one level of difficulty, but using it with wisdom and integrity is a whole other ball game!

We work with regulators and standard setters to figure out the rules for AI and conduct our own research. The 2022 AI Governance Report found that 58% of organisations have no AI expertise on their Boards, let alone ideas on how to use it, and 91% of organisations have no controls on their AI use. This led us to think about what services we could provide to fill those gaps. We offer leadership education, board development and consultancy, helping companies devise their AI governance and controls. We work with all types of organisations, from the NHS and housing associations on the not-for-profit side through to FTSE-listed businesses and very entrepreneurial, dynamic companies.

This sounds fascinating, Sue, and very busy. I’m curious, where do you find time to sleep?

It's about spinning lots of plates! I'm very fortunate to have reached a stage in my career and my life where I can pick and choose. If something comes along and I don't think I'll add value, or it would make me feel very stressed, then I don't do it. Instead, the things I invest my attention and time into are the things I enjoy and where my skills make a difference.

When did you first become involved in AI, and what drew you to it? Was there anything in particular that inspired you?

Yes, a couple of things. Most of my career has been spent in marketing, communications and government relations, so nothing technical. However, I always liked tech and wanted to do something more with my career. It's difficult to make the transition without going back and doing a three-year computer science degree, and I couldn't see a clear pathway to making the move across.

It came to a head back in 2018 when I was CEO of one of the UK's largest and oldest community foundations. We were awarding £5 million a year in grants to help disadvantaged people and communities and I heard wonderful stories about the difference the money was making to real people's lives. Yet, like most organisations, the data was kept in silos or trapped in a database. As a leader, I knew if I could link that data with other information it would provide us with valuable insights. It would enable us to do more, and be a better organisation.

So, I knocked on Bristol University's door and explained the data I had and that I'd heard about Artificial Intelligence and Natural Language Processing (a type of AI that finds patterns in unstructured text). I said to them, “Plug me in, put my data into the machine!” To cut a long story short, nobody knew quite how to plug my data into an AI model, so I had to find out for myself! Around the same time, the UK government was supporting the creation of new MScs at several UK universities to attract more people into the field. These MSc conversion courses welcomed applicants without a maths or computer science background. It was fascinating to learn from scratch, including a five-week coding boot camp at the start.

Apart from wanting to know about these tools myself, I could also see that AI will absolutely dominate our lives. During all of my career, and all of my life, I've been trying to shift power. You can see with AI and the whole data world that a tiny number of people and companies control the data and the technology. I realised that if I can expand my knowledge to help others do the same, then we're shifting the power. We’re recognising the concentration of power — but we're also helping to shift it. I’m doing this so people can ask better questions and take more control of the way AI is used in their lives and their businesses.

That, in a nutshell, is what attracted me to the AI world!

For those less familiar with AI, could you elaborate on what a typical working day looks like for you? What projects are you involved in, and what are some of the challenges you encounter?

I like variety, so there are no typical days, which is good!

Firstly, a significant portion of my time is spent helping people grasp what AI is and covering the basics. It's about harnessing the power of computers to find patterns hidden in large amounts of data, and then using those patterns to predict, personalise or automate things. "What data have we got?" and "Is there a pattern?" are always the first questions. Then, if a pattern emerges, "What do we want to predict, personalise or automate on the back of it?". So we're always looking for the real problems an organisation has that we can solve.

During my webinar presentations, I use a robot with an Nvidia circuit board and a camera that is designed for machine learning (a type of AI). It's like an advanced version of the Raspberry Pi that is often used in schools to help pupils learn about computing and coding. I use the robot to demonstrate the difference between ordinary computing and machine learning.
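To make that distinction concrete, here is a minimal sketch in Python (an invented order-value scenario, not Sue's actual robot demo): in ordinary computing a human writes the rule, whereas in machine learning the rule is inferred from patterns in example data, which is what then lets you predict, personalise or automate.

```python
# A minimal sketch, not Sue's robot demo: the order-value scenario and
# threshold below are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Ordinary computing: a human writes the rule explicitly.
def is_large_order_rule(order_value):
    return order_value > 500  # threshold chosen by a person

# Machine learning: the rule is learned from patterns in past data.
past_orders = [[120], [90], [700], [650], [30], [980]]  # order values
labels      = [0,     0,    1,     1,     0,    1]      # 1 = "large"
model = DecisionTreeClassifier().fit(past_orders, labels)

# Both now make a prediction, but the learned version derived its
# boundary from the data and would adapt if retrained on new examples.
print(is_large_order_rule(600))   # True (hand-written rule)
print(model.predict([[600]])[0])  # 1 (pattern learned from data)
```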

I spend a chunk of time each day preparing for client meetings and upcoming presentations. With clients we get into the detail to work out how to govern their use of data and AI better. To do this, I ask questions such as, "How do decisions normally get made in that organisation?" and, "With all the complexities around ethics that AI brings, how can we make good decisions in the organisation? What governance, guardrails and frameworks will work for them?".

I also keep up with the latest developments and scour the world trying out new AI tools, exploring who is doing what and how I can help other people understand what it is.

Finally, I conduct research and aim to influence government policy. In the UK, we're currently preparing for a global summit on AI safety that will be hosted by the UK Government in November 2023. Trying to influence big thinkers and decision makers in that sort of field takes time too!

What are some of the challenges you encounter that might be quite different to other people's challenges in the world of business?

One of the big challenges is credibility, and I think we all experience this at different levels. If you work for DeepMind or Oxford University for example, something that gives you a badge, then automatically people say, "The door is open, you may come in." Whereas if you're working for an organisation such as AI Governance, which they may never have heard of, it's very hard to get a seat at the table.

Consequently, I spend time seeking out influential and receptive people who will give me, and others outside the inner circle, the chance to be inside it. This is one of my big motivations; if leaders only listen to the small group of people who currently hold the power, then other voices are not heard. That means society is missing out on the opportunities to learn about other people's experiences and make sure regulations and systems include everyone.

Recently, US Senator Chuck Schumer organised an AI forum. He gathered the big tech CEOs in a room, and said, "No media; we're going to have this big conversation about regulating AI in the US at the federal level." However, he also allowed people from civil society and wider sectors to be present. Although we don't know how much influence they were allowed to have, I hope simply by being there they changed the views of some of the top tech leaders; those who tend to think they know everything and have nothing to learn from the outside world. We can all learn from each other, and should have the opportunity to contribute our valuable life skills and experiences.

You inspire and support Boards to use AI and data for profit and social impact and your mission at AI Governance is ‘to inspire as many organisations as possible to use AI with wisdom and integrity’. From your perspective, what strategies can businesses adopt to achieve these objectives?

Currently, there's little regulation that specifically relates to AI use, but we do have the General Data Protection Regulation (GDPR) and intellectual property laws. That said, there's a tendency for some to find ways around these laws, or to come up with a business idea and say, “We'll do it because there's nothing stopping us from doing it.”

It’s about making ethical choices. A good example of this is the transportation company Uber, which infamously sacked some drivers via text message. It used an automated process behind the scenes and an algorithm that compiled data about drivers, before deciding who should be allowed to continue and who should be stopped from being an Uber driver. The individuals were notified of the outcome via an automated message, with no way for them to appeal or find out why or how the decision was made. Somebody at Uber made an ethical choice to do it that way.

Where the GDPR applies, we have the right not to be subject to automated decision-making, so in the UK, as an Uber driver, we would be able to challenge the action and ask for a human decision. However, in other parts of the world, there's no recourse to an appeal.

In business, we need to think "Just because we can, doesn’t mean we should". Another example, from more than 20 years ago, concerns the music file-sharing platform Napster. The leaders of the business should have considered that what the business was doing, at its heart, was ripping off musicians' IP and giving it away for free. If someone had taken a moment to reflect and said, “Just because we can, doesn't mean we should," they might have recognised it wasn't a good idea. Napster only lasted for a couple of years and collapsed under the weight of legal cases, but that business should never have got off the ground in the first place. Its leaders should have been wise enough and used integrity to say, “Hang on a minute, this isn't good. Let's find a way of making music available AND rewarding the artists."

I work with a logistics company who could simply recruit new MBA students to help with AI and data in their business. However, I advised their CEO that it might take considerable time to get them up to speed and to really understand the business and, eventually, they'll be poached by somebody else. Instead, I highlighted the benefits of educating and developing skills internally; to encourage existing staff to think about how to use data differently, to understand what machine learning is about and get really excited about doing new things for the business. This has a much higher social impact, provides people with new skills and domain knowledge and impacts the business positively. It also increases employee loyalty, since you've invested in existing staff and helped take them into the next generation of the world of work — rather than chucking them on the scrap heap!

Another area to consider is the environmental impact of using AI. Sometimes, companies will tell me about a wonderful new large language model they're training with their own data. Generative AI can leverage internal business data and use that to write and create things that help both internal stakeholders and customers. However, I ask them to consider the environmental impact of taking that action. When we train a large language model for example, it uses significant energy on computer processing. This in turn uses huge amounts of water to cool the data centre. So, I ask my clients to evaluate the social and environmental impact of a new solution, and who it will make a difference to.

I've worked with businesses who design things using AI that appeal to, or work for, 60% of people. This works, as it's their target market and they're happy with that. However, I advise them to consider the other 40%: those who may be digitally disadvantaged, who lack the money for data, or who have sight or other physical impairments that prevent them from using the service. I ask them to consider what might happen if they design something that 100% of people can use! Sometimes, it's a challenge too far but other times they realise it would be better for the 60% in the middle and also include the 20% either side. It's encouraging people to think in a different way and take a broader view. It's not criticism; however, sometimes you need somebody to look outwards and look up, and consider a different viewpoint or way of doing things. It can open up conversations and be very stimulating.

I also gather stories on how companies are using AI and then relay that in a different way to another client. One of my clients is in financial services. They were sceptical about using IT and AI for customer interactions, which was understandable, as they have a duty to look after the interests of customers and thought automation might be undesirable for them. However, I shared with them the example of a company using AI to identify potentially vulnerable customers. They analyse the words and tone of voice customers use when contacting the call centre. When a vulnerable customer is identified, they are directed to call handlers who have no time constraints and can devote as much time as needed to assist them. This has led to a reduction in dissatisfaction among customers, who feel genuinely heard and valued. It's a great way of thinking.

Technology isn't always evil! Most of the time, we can use it for really good purposes — we may need to push ourselves harder to think about what these are.

In your opinion, how can leaders think ahead when it comes to AI and learning and development, and what value can it bring to their business?

All of our teams in the future will have significantly more people who can understand data. One of the crucial things I advise leaders is to think about who in the team they can upskill. Who thinks logically, who sees patterns? These are the key people they should train into roles that analyse data and use AI tools. Linked with that, I always tell people not to join the war for talent, but instead to grow your own! Otherwise, if we simply try to poach from each other, it's a zero-sum game in the end. You'll lose as many people as you gain and this will be costly. I advise clients to consider who they can move away from spreadsheets (which still dominate so many of our organisations) and train them to use accessible tools like Power BI before moving to deeper data analytics tools, which AI fits into.

What practical implications do you think AI will have on the workplace, and in particular education and L&D?

From a workplace perspective, I think we'll see a huge revolution in middle management, where typically we see people gathering data and feeding it upwards. Those jobs are ripe for automation. HR is another area poised for significant change. The whole spectrum of HR activities has potential for adding more automation; everything from identifying who we should interview for jobs, through to finding patterns in exit interview data.

In the L&D and education world, I anticipate one of the biggest changes will be in personalisation, which can instigate enormous changes in learning styles, pace and course content. AI can generate a training programme and create scripts and videos. Although some human oversight is needed, AI can already do a huge amount of the basic tasks. So, if we can gather more information about an individual and then have AI generate the training for them, I think it will be a huge revolution — and it's just around the corner.

In your view, what are the advantages and disadvantages of AI in the workplace?

I spend a lot of time on this subject in my webinars and when teaching people the basics of AI, how they might use it and the risks. Leaders, wherever they are in an organisation, and aspiring leaders, must be familiar with what the pros and cons are.

On the plus side, wherever you have a business or organisational problem, if it can be solved by predicting, personalising or automating, then AI is your tool. One way or another, there will be a tool if you can gather the data in the right sort of way. The opportunities are huge.

On the cons side, the disadvantages are, again, huge. There are risks around data and data poisoning. If we have a system that provides training and development and we want it to continually learn from the pace at which people are moving through the course, the feedback they provide and their eye contact — if it's on a camera — there's an opportunity for data poisoning. A bad actor, or something simply going wrong, can feed in data that corrupts the entire system. A small risk can have a huge impact. There are also risks around how we think about solutions. If we don't take a broad enough perspective on the solutions we're proposing, that can be damaging.
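To show how small that "small risk" can be, here is a toy sketch of data poisoning in Python (the pass/fail scenario is invented, not a client system): a handful of fabricated records is enough to flip what a model predicts for a perfectly legitimate case.

```python
# A toy illustration of data poisoning; the scenario is invented.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)

# Clean data: learners who spend more than 5 hours on a course pass it.
hours  = rng.uniform(0, 10, 300).reshape(-1, 1)
passed = (hours.ravel() > 5).astype(int)
clean_model = DecisionTreeClassifier(random_state=0).fit(hours, passed)

# Poisoning: a bad actor injects a few fake records claiming that
# learners who studied around nine hours failed the course.
poison_hours = np.array([[8.9], [9.0], [9.1]])
X = np.vstack([hours, poison_hours])
y = np.concatenate([passed, [0, 0, 0]])
poisoned_model = DecisionTreeClassifier(random_state=0).fit(X, y)

probe = np.array([[9.0]])             # a genuinely diligent learner
print(clean_model.predict(probe))     # [1]: pass, the real pattern
print(poisoned_model.predict(probe))  # [0]: three fake rows flipped it
```

A fully grown tree happily memorises those three fake rows, which is exactly why a small amount of bad data can have such an outsized impact.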

There are also potential disadvantages around the ethical choices we make, and leaders get very little training in ethics. My daughter is currently at university studying business and management, and although there's an ethics module, it doesn't permeate through the entire course content. If you've been in the workforce for several years, you may never have been trained on how to address ethics in business. People shy away from it because they don't spend their day talking about it. It seems very strange and difficult — so we're on a mission to help people understand how to frame those ethical choices and not ignore them.

Do you think AI has the potential to enhance and accelerate learners' abilities and expand their knowledge and soft skills, or do you think it could help them to cut corners?

On the positive side, when people want to learn something, I think AI is a brilliant tool for changing the way we learn; offering things up in a way that we find most digestible and memorable and creating learning modules incredibly fast. I'm continuously learning and I know that Generative AI, for example, can create a brilliant first draft of a proposal I need to put together. However, I have to check if it's quoting facts and data; it might have simply made things up!

Therefore, learners need sufficient knowledge about these tools to enable them to make informed choices and to spark that critical thinking, rather than depending on them and assuming their accuracy. I do worry that, in the future, people may become complacent about the need to learn. If they do want to learn, then these tools can be brilliant. However, there is a risk of complacency if people believe they don't need to remember or to learn simply because they can look up the answers or ask a machine!

As a fundamental part of human nature, I hope we'll always want to be curious and continue to push ourselves. I think, certainly in the context of young people and the learning process, the ease of accessing data could make people less curious though — and that is a worry.

How do you think businesses can prepare for a tech-focused world and think ahead, and what skills will be fundamental to build? How can they avoid joining the losing battle in the war for talent?

The first step is to value data more, and to value the people who can wrangle that data into something useful. This should be a number one priority. Whenever I go into an organisation, I always start by asking, "What data do you have and what do you have permission to use it for?", and then I build from there. It's a constant frustration for me, and for many leaders, that data is generated in silos in organisations and then kept there. You can't read across the silos if everything is stuck behind these walls, or kept on a spreadsheet or laptop. I ask, "How do we get the data out of there and make it usable?".

Secondly, leaders need to be as comfortable talking about data as they are talking about their instincts. I've had extensive experience in boardrooms and come across leaders who say, “Well, I believe the right thing to do is...". However, it's uncommon that leaders will say, “The data actually tells us the right thing to do is…” This is a change in mindset we all need to get comfortable with, and some people will be better at it than others. We must remove our own prejudices and biases that lead us to say that our way of doing it must be best simply because we believe in it. We must look at the numbers, the information and the wider data scene, which may tell us something different and challenge our assumptions and preconceived ideas.

You've spoken on a diverse range of subjects including, “Can philanthropy and technology combine to close the digital divide?". This sounds fascinating; would you be able to provide an insight into your views on this subject?

This particular webinar idea originated from a dinner I set up in February 2020, before the first Covid-19 lockdown, and the last live event we ran that year. We gathered successful tech business leaders and philanthropists who were doing interesting things to help others with their money. We also brought together people from communities; people experiencing real disadvantage, who have the voices to explain what life is like for them.

I find it fascinating that a typical tech entrepreneur will develop, perhaps, three businesses. They will grow and sell each one before reaching a point where they realise they have more money than they could ever need. Considering what to do next, they explore philanthropy, but tend to get into it thinking they are an expert, rather than approaching it as a beginner. They tend to assume they know the solution, rather than being guided by the data. They may be tempted to dictate to a charity or community saying "I'm going to give you a million pounds and this is what I want you to do with it...". It takes more wisdom, and humility, to listen to the people with the issue who, quite often, want somebody to fund what they already know to be the solution, rather than tell them what to spend the money on. I find this whole area really fascinating; I believe philanthropy and technology can solve some of today's problems — but this can only happen when more tech leaders approach it with a beginner's mind, and listen rather than tell.

From an AI and Data Governance perspective, how can businesses benefit from adopting AI ethics strategies of their own?

There are three main benefits I ask companies to consider when thinking about these issues more broadly.

(1) AI uses and experiments with data, so it's more likely to fail than to succeed. 85% of data science projects fail, although that doesn't mean you learn nothing from them. It simply means they don't achieve what you originally intended. When you spend more time understanding AI governance and ethics, you can get more value out of those supposed failures.

(2) Projects are less likely to fail when you've involved a more diverse group of people, for example, and thinking things through before you implement them can save you time! Your chance of success can increase and the chance of waste decreases.

(3) Consider your reputation. When you can demonstrate you've considered the negative consequences, and deliberately refrained from taking certain actions, it can be very beneficial. Print and digital business Xerox wanted to reduce the number of employees who were leaving the business shortly after they joined. They noticed a correlation: those who lived further away from the office were more likely to leave. They could have deliberately set their hiring algorithm to not interview those individuals. However, they wisely recognised that their office location was in an affluent area — and those living further away tended to be living in less affluent areas. Excluding them contradicted their company values, so instead they removed that data from their hiring model, meaning people living further away were not less likely to get an interview. A very wise and ethical decision — you don’t want to end up on the wrong side of history!
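As a rough sketch of the kind of fix described here (invented column names and data, not Xerox's real pipeline), the governance decision boils down to dropping the proxy feature before the model is ever trained:

```python
# A sketch with invented data; "commute_miles" stands in for the
# distance-from-office signal that was deliberately left out.
import pandas as pd
from sklearn.linear_model import LogisticRegression

applicants = pd.DataFrame({
    "years_experience":  [1, 4, 7, 2, 9, 3],
    "skills_test_score": [55, 70, 88, 60, 92, 64],
    "commute_miles":     [25, 3, 4, 30, 2, 28],  # proxy for affluence
    "stayed_past_year":  [0, 1, 1, 0, 1, 0],
})

X_all = applicants.drop(columns="stayed_past_year")

# The governance fix: remove the proxy feature before training,
# so the model cannot penalise people for where they live.
X_fair = X_all.drop(columns="commute_miles")
y = applicants["stayed_past_year"]

model = LogisticRegression().fit(X_fair, y)
print(model.predict(X_fair))  # screening now ignores commute distance
```

Dropping one column is rarely the whole story, since correlated features can leak the same signal, but it captures the deliberate, values-led choice being made.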

The Women in AI Ethics™ (WAIE) is a global initiative with a mission to increase recognition, representation, and empowerment of women in AI Ethics - and you are on the 2023 list of “100 Brilliant Women in AI Ethics”™. Congratulations! What does this mean to you, and why is it important to increase recognition, representation, and empowerment of women in AI Ethics?

It means a huge amount to me. I mentioned credibility earlier; to have other people believe you're making a difference is a huge boost to that credibility and to one's self-esteem. I was very grateful for that. 51% of the planet is female, and our views sometimes differ from those of people who are not female. Having that recognised is really important.

Until 2022, there were no female crash test dummies for car testing, which meant that women were 17% more likely to die in a serious crash and 73% more likely to suffer serious injuries. Although I don't blame men for designing a world that suits themselves, now we should know better!

So that’s what it means to me; that we can harness everybody's input for AI - and that makes the world better for everyone.

How do you keep your skills and knowledge on AI fresh and up to date? Is it a challenge?

To be honest, it's not a challenge personally. I invest time daily to explore what's going on out there and I'm constantly curious with an itch to know more! I don't play computer games, so this is my version! I'm constantly thinking about what something is doing, how it works. Same part of the brain, just different applications.

Finally, I understand that Bristol is set to build and host the UK's most powerful supercomputer, Isambard-AI, to turbocharge AI innovation and drive pioneering AI research. I wondered what you thought about this?

Bristol has a big history in supercomputing, and there are some really interesting people based in and around this area doing some fascinating work.

I could have at least a half-hour conversation about supercomputing and quantum computing! For now, I would say the world is rich and full of opportunities — and I'm very happy when some of those opportunities come the way of Bristol.

Sue Turner is an AI, Data Governance and Ethics specialist and Founding Director of AI Governance.


Would you like to feature in the series?

Our interviews are conducted by Nicola Greenbrook, a highly experienced HR specialist-turned-writer.

If you would like to chat about your own experiences in Learning & Development, Human Resources, or Talent Acquisition, we would love to hear from you - please use the contact form below and we'll get right back to you.
