Reggie Townsend Unpacks the SAS Approach to Responsible AI


Since the mainstream launch of ChatGPT, artificial intelligence and its promises and pitfalls are on the minds of more people than ever before. Instead of retreating from the risks and uncertainties that AI brings, SAS VP of Data Ethics Practice Reggie Townsend wants us to meet this moment together.

“It’s a moment that impacts all of us, and we need all the smart people to bring all of their skills to this conversation,” he said during a press conference at SAS Innovate in Orlando last week.

In addition to leading the Data Ethics Practice, Townsend is a member of the National Artificial Intelligence Advisory Committee that advises the President on issues related to AI. He is also one of our 2023 Datanami People to Watch.

Responsible AI was a major theme at SAS Innovate in sunny Florida. Townsend delivered a presentation during the opening session where he emphasized how trust is central to meaningful relationships and civil societies but warned that AI creates an opportunity to erode that trust in many ways.

“We want to make sure that we’re being ethical by design in building trustworthiness into the platform to enable our customers to build compliant, responsible AI with SAS,” he said on stage. “I believe we have the most comprehensive trustworthy AI platform for data scientists and developers on the planet, bar none.”

Reggie Townsend presents on stage at SAS Innovate in Orlando. (Source: SAS)

A Commitment to Responsible AI

Townsend explained how, a year ago, SAS formalized its commitment to responsible AI innovation by establishing a set of data ethics principles that have helped anchor the company during this time of rapid AI innovation. The principles guiding the Data Ethics Practice are human centricity, transparency, inclusivity, privacy and security, robustness, and accountability.

One way Townsend’s team works toward these principles is by developing ongoing internal training for all SAS employees. This training covers risk management strategies and methods to establish what Townsend calls a level of cultural fluency and behaviors around responsible AI throughout the company.

In his presentation, Townsend noted the goal of the training is to put people in the best position to recognize and respond to AI ethical risk in as close to real time as possible, ideally at the point of transaction.

“The training begins with our principles,” Townsend told Datanami in an interview. He said the first part of that journey involved getting people on the same page about what accountability really means and letting them work through use cases of their own where they must face the tensions that exist between AI capabilities and accountability.

“We’re talking about making sure that we’re prepared to be held to account for certain capabilities. ‘Is that what you want to disclose in the midst of a sales conversation or consulting engagement? What are the reasons why you would? And what are some of the reasons why you wouldn’t?’” he said. “So, it’s less about giving people explicit instruction beyond the definitions and more about putting people into actual situations where they have to grapple with some of these conundrums, if you will.”

(Phonlamai Photo/Shutterstock)

Creating a Common Knowledge Around AI

SAS is working on developing external training around responsible AI as well. Townsend says SAS customers value the company’s perspective in the AI space, not just when it comes to the technology, but also the operational and regulatory aspects. Rather than simply training customers to use the SAS Viya platform, Townsend wants to contribute to the common understanding around AI.

“We want to be a part of that conversation and be one of the places that folks can go to say, ‘Okay, well, what is this thing all about?’ You shouldn’t have to be a data scientist to appreciate that. We then want to influence those who would attend with the principles we hold ourselves to. One might say, ‘These are the SAS principles.’ Well, a lot of the language that we use is common language that gets used in other places as well. So, it’s not so much the principles themselves, but it’s how those principles get actuated, because it’s the culture that makes the difference.”

He continued, “That’s the process we want to help people go through: to begin to create their own principles related to AI and then figure out their ‘why’ behind them.”

SAS Emphasis on Model Governance

Townsend’s role on the National Artificial Intelligence Advisory Committee is to provide recommendations on the current state of U.S. AI competitiveness, the state of science around AI, and AI workforce issues. At the time of his appointment to the committee last year, Townsend acknowledged the urgent need for legal, technical, social, and academic frameworks to capitalize on the promise of AI while mitigating its peril. He and his colleagues provide insight into five primary areas: bias, AI research and development, international development, workforce readiness, and government AI deployment.

During our interview, I asked Townsend to identify the area of AI research and development where SAS takes the most innovative and forward-thinking approach.

(INGARA/Shutterstock)

“One of our areas of particular note is governance. What we’re doing around model operations and governance is pretty significant,” he answered. Townsend explained that the company’s inclusive approach to model governance offers a unique value proposition in the AI space. Whether AI models are created with SAS, Python, R, or open source platforms, these algorithms need to be continuously monitored within a consistent governance structure, he argues.

“We shouldn’t discriminate when it comes to models. Just bring all the models to our repository, and we’ll govern those models over time,” he said. “Because ultimately, all the players in an organization need to understand model decay and explainability in the same way.”
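The article doesn’t say how the SAS repository measures decay, but model decay is commonly tracked with distribution-drift statistics such as the population stability index (PSI), which compares a model’s score distribution at training time with a recent production window. The sketch below is a minimal, framework-agnostic illustration of that idea in plain Python; the function, thresholds, and data are illustrative assumptions, not SAS code.

```python
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """Compare a training-time score distribution (baseline) with a
    production window (current); larger values indicate more drift."""
    edges = np.histogram_bin_edges(baseline, bins=bins)
    # Clamp production values into the baseline range so outliers land in edge bins
    current = np.clip(current, edges[0], edges[-1])
    base_pct = np.histogram(baseline, bins=edges)[0] / len(baseline)
    curr_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the proportions to avoid log(0) in sparsely populated bins
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

# Illustrative usage: scores whose distribution has shifted in production
rng = np.random.default_rng(42)
training_scores = rng.normal(0.0, 1.0, 5_000)
production_scores = rng.normal(0.4, 1.2, 5_000)

psi = population_stability_index(training_scores, production_scores)
print(f"PSI = {psi:.3f}")  # a common rule of thumb flags values above 0.25
```

Because the check only needs scores, not the model object itself, the same monitoring loop can sit in front of SAS, Python, R, or open source models alike, which is the point Townsend is making.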

The SAS Viya platform includes model management and governance features such as model cards, a capability that gives technical and non-technical users a comprehensive understanding of a model’s accuracy, fairness, explainability, and drift. There are also bias assessments to highlight the potential for bias, as well as capabilities around data lineage and natural language insights.
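The article doesn’t describe the schema behind these model cards, but the idea can be pictured as a small structured summary that travels with the model. Below is a hypothetical Python sketch under that assumption; every field name and threshold is illustrative and does not reflect Viya’s actual model card format.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    """Hypothetical governance summary a reviewer might read; the
    fields mirror the qualities named above, not any Viya schema."""
    model_name: str
    framework: str                # e.g. "SAS", "Python", "R"
    accuracy: float               # headline performance metric
    fairness_gap: float           # e.g. selection-rate gap between groups
    top_features: list = field(default_factory=list)  # explainability summary
    drift_psi: float = 0.0        # distribution drift since training

    def needs_review(self) -> bool:
        # Illustrative thresholds only; a real policy would set and justify these
        return self.fairness_gap > 0.10 or self.drift_psi > 0.25

card = ModelCard(model_name="loan_default", framework="Python",
                 accuracy=0.87, fairness_gap=0.04,
                 top_features=["income", "utilization"], drift_psi=0.31)
print(card.needs_review())  # True: the drift figure exceeds the illustrative cutoff
```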

Data for Good

These built-in governance capabilities are part of the commitment SAS has shown to being ethical by design, but there are also real-world projects being brought to life by this philosophy.

Townsend mentioned that the company recently moved its Data for Good team from the marketing department into the Data Ethics Practice. Townsend says the Data for Good team is largely focused on telling stories about how data is used for the benefit of humanity, and the team will continue telling those stories with an emphasis on human-centered AI.

The Data for Good team is one avenue through which employees can offer their skills on a non-job-specific basis. One aspect of this is the Project Marketplace, an internal portal where employees can find projects to work on based on their skills. Townsend gave an example of a project to help a municipality with citizen services where people with data analysis or visualization skills may be needed. This is an employee retention tool, as well as an opportunity for employees to share and refine their skills on projects that aren’t just related to their day-to-day jobs, he noted.

This year, the Data for Good team is focusing on projects related to financial services, AI’s impacts on vulnerable populations, justice and public safety topics related to AI, and healthcare-related AI, Townsend said. One project of note is a crowd-sourced data labeling effort in the Galapagos Islands, where citizen data scientists are helping identify sea turtles to aid in their conservation. (Look for a feature on that project, coming soon.)

The Next Steps

Toward the end of our interview, I reminded Townsend of something he emphasized during the press conference earlier that day. In a room full of media professionals, he told us, “This notion of responsible AI also has to include responsible rhetoric about AI,” adding that lowering the temperature in our reporting as journalists is important for instilling trust rather than scaring people about AI.

The rise of ChatGPT marks a moment when AI capabilities have gone mainstream, and more people than ever are discussing its implications. As citizens, whether we are data scientists, AI specialists, government officials, journalists, or none of the above, every person has the potential to be impacted by AI. Instead of contributing clickbait articles that focus on the more perilous possibilities of the technology, Townsend says we all share in the responsibility of understanding the nuance of AI and being able to talk about its substantial risks right along with its benefits.

“We all share this responsibility. It can’t be about ‘What’s the government going to do? What are the tech companies going to do?’ It has to be about ‘What are we going to do?’ Because we’re having a conversation for the first time in human existence about capabilities that feel like they’re more intelligent than us. And for all of our existence, we’ve prided ourselves on being the most cognitively advanced creature on the planet, so that unsettles us,” he said.

When asked what the conversation around AI might sound like in the future, Townsend said he doesn’t yet know the answer, but his desired outcome would be to crystallize a layman’s understanding of AI that would enable everyone to make a willful choice about how it will or will not impact their lives.

“The analogy that I use is the electricity that comes out of these walls. Both of us know, and we didn’t have to go to school to learn this, not to take a fork and stick it in the outlet,” he said, noting that this knowledge is ingrained without the need to be an electrician or know the finer details of power generation.

“We need to make sure there’s a base level of ‘don’t stick a fork in the wall’ knowledge about AI. I don’t know when we’ll get there. But I do know, in order to get there, we need to start educating, and it takes a company like ours to be a part of that education.”

Related Items:

People to Watch 2023 – Reggie Townsend

SAS Innovate Conference Showcases Investments, Partnerships, and Benchmarks

Altman’s Suggestion for AI Licenses Draws Mixed Response
