No Fear, AI is Here: How to Harness AI for Social Good

As tech leaders, we have a choice: Embrace AI to solve critical problems or remain fearful of its capacity to divide us. It’s our duty to champion the former.

Jane Moran, Chief Technology Officer, Benevity

April 26, 2024


Looking back on the past decade, I’m blown away by the number of positive advancements the world has achieved through technology. We’ve developed COVID-19 vaccines in record time, probed deeper into the solar system, advanced fuel-efficient technologies, and built tools to aid in earlier disease detection.

The one thing these breakthroughs all have in common? AI.  

Long before ChatGPT’s debut in 2022 and the meteoric rise of generative AI, scientists and technologists were quietly leveraging AI to move faster and make bigger leaps in scientific progress. But over the past year, conversations surrounding AI have become increasingly controversial. According to the 2023 MITRE Report, only 39% of US adults believe current AI technology is safe and secure. That reservation isn’t surprising: plenty of airtime has been devoted to the myriad ways AI can go wrong -- and has already gone wrong. These are significant issues, and we should not ignore them. 

However, my biggest concern is that these high-profile issues will lead us to overlook -- or even completely abandon -- AI’s power and potential to do good. The world faces an alarming number of critical unsolved problems, from climate change to racial inequity to entrenched poverty. Solving these issues before it’s too late requires collective action. And technology -- especially AI -- can and should play a central role in coordinating our efforts and driving progress. 


So, as tech leaders, what’s our next move? 

We must proactively think about how our organizations can responsibly leverage AI for good. Our role is to offer our teams the support and guidance required to harness AI’s full power in ways big and small to inspire positive change, ensuring fear doesn’t override optimism. And while AI can outperform us at many tasks, it cannot replace the power of human creativity, perspective, and deep insight. 

The Next Generation Needs Your Support 

It won’t be CTOs like me who come up with the next great idea for solving the world’s most critical problems. 

It’s going to be the people on my team -- and your team -- who are in the trenches writing the code and building new solutions. And they’re going to use AI to get there. AI can speed up the trial-and-error phase of every project -- generating code more quickly, debugging faster, automating documentation, suggesting alternatives. 

These advancements mean we can arrive at meaningful solutions faster, but only if we aren’t scared to dive in. The best step we can take as leaders is to encourage the healthy exploration, experimentation, and critical thinking necessary to solve problems using today’s best technologies.  


I want my team to learn by doing, to look for strategic ways to embrace this technology, and to iterate until they arrive at their desired outcomes. That type of working environment requires building teams willing to take risks, with the grit and resilience to take failures in stride and adapt to changes in technology and ways of working. 

However, our job as tech leaders is not only to motivate capable teams, but also to provide them with frameworks for finding new ways to responsibly leverage technology for good. To do so, we’ll need to stay grounded in the fundamentals of science -- facts, data, and measurement. 

I constantly push my product and technology teams to measure the value or incremental usage of their developments. For example, if my team is working on a platform to support volunteering, that means asking questions like: 

  • Are we attracting more volunteers? 

  • Is there alignment between the volunteers’ skills/interests and the needs of the organizations they serve? 

  • Are the volunteers using our platform more satisfied with the experience? 


Setting goals and deciding how to measure outcomes offers teams the guardrails and grounding they need to produce tangible results. Every time we make a change or add something to our platform, we should be able to track the impact on user experience. However, aligning on measurement tactics is difficult, and many companies simply skip this step and dive into experimentation. But without clear goals and key performance indicators, it’s easy to veer off course toward flawed outcomes. 
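
To make that concrete, here is a minimal sketch of how a team might roll those three questions up into a simple KPI report. Everything in it -- the data model, the field names, the sample figures -- is hypothetical and invented purely for illustration; it is not drawn from any real platform or schema.

```python
# Hypothetical sketch: rolling the three volunteering questions into a KPI report.
# Field names and sample data are illustrative only.
from dataclasses import dataclass


@dataclass
class Volunteer:
    signed_up_this_quarter: bool
    skills: set[str]
    org_needs: set[str]   # needs of the organization they were matched with
    uses_platform: bool
    satisfaction: int     # survey score, 1-5


def average(scores: list[int]) -> float:
    return sum(scores) / len(scores) if scores else 0.0


def kpi_report(volunteers: list[Volunteer], last_quarter_signups: int) -> dict[str, float]:
    # KPI 1: are we attracting more volunteers than last quarter?
    new_signups = sum(v.signed_up_this_quarter for v in volunteers)
    growth = ((new_signups - last_quarter_signups) / last_quarter_signups
              if last_quarter_signups else 0.0)

    # KPI 2: share of volunteers whose skills overlap with their org's needs
    aligned = sum(bool(v.skills & v.org_needs) for v in volunteers)

    # KPI 3: average satisfaction, platform users vs. non-users
    users = [v.satisfaction for v in volunteers if v.uses_platform]
    others = [v.satisfaction for v in volunteers if not v.uses_platform]

    return {
        "volunteer_growth": growth,
        "skills_alignment_rate": aligned / len(volunteers) if volunteers else 0.0,
        "satisfaction_platform_users": average(users),
        "satisfaction_non_users": average(others),
    }


if __name__ == "__main__":
    sample = [
        Volunteer(True, {"python", "teaching"}, {"teaching"}, True, 5),
        Volunteer(False, {"design"}, {"fundraising"}, False, 3),
        Volunteer(True, {"fundraising"}, {"fundraising", "events"}, True, 4),
    ]
    print(kpi_report(sample, last_quarter_signups=1))
```

The particular metrics matter far less than the habit: agree up front on what “working” means, then check every change against it.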

This example of building a volunteering platform may sound small in the grand scheme of things, but it’s the same process we’ll need to follow as we ramp up our use of AI to solve the world’s biggest challenges. The key is to stay grounded in responsible frameworks that keep us honest about the real impact of our work. 

It’s Our Responsibility To Make AI a Force for Good 

There’s so much polarization in the world right now that keeps us from solving the most pressing issues of our time. Yes, at times AI can fuel those divisions. Or it can heal them. The choice is ours. 

We only have a finite amount of time to address climate change and related issues such as poverty and inequity. To get there, we’re going to have to try. And then try again. And again. Though it will be an uphill climb, AI can help us climb faster -- and explore as many options as we possibly can, as quickly as we can -- if we use it responsibly. 

The key is for tech impact leaders to bring a human-centric perspective to their company’s investments in and use of AI technology, ensuring their strategies don’t lead to unintended consequences for employment. 

Don’t let fear prevent you from getting all the help you can from the most powerful technology available. Your team, and the world, need you to be fearless. 

About the Author(s)

Jane Moran

Chief Technology Officer, Benevity

Jane Moran leads the charge in delivering Benevity’s innovative ESG-tech solutions as global companies recognize how purpose attracts, retains and engages employees and customers. Working with Benevity's product team, Jane propels Benevity’s growth, enabling more companies, from mid-market to enterprise, to engage stakeholders in ESG, purpose and social impact initiatives. 

Jane completed her Ph.D. in astrophysics; however, her skills in decoding complicated data, along with her ability to lead with empathy and authority, landed her in data science, where her work has spanned understanding customer behavior and applying that understanding to data-driven solutions in targeted marketing and optimized digital experiences. 
