
How can organisations prepare for generative AI?

Organisations across the board significantly underappreciate AI risk

Amid recent hype around ChatGPT and generative artificial intelligence (AI), many are eager to harness the technology’s increasingly sophisticated potential.

However, findings from Baker McKenzie’s 2022 North America AI survey indicate that business leaders may currently underappreciate AI-related risks to their organisation. Only 4% of C-suite level respondents said they consider the risks associated with using AI to be “significant,” and less than half said they have AI expertise at the board level.

These figures spotlight a concerning reality: many organisations are underprepared for AI, lacking the proper oversight and expertise from key decision-makers to manage risk. And if unaddressed, organisational blind spots around the technology’s ethical and effective deployment are likely to overshadow transformative opportunities while causing organisations to lose pace with the explosive growth of the technology.

How is generative AI changing the risk landscape?

These days, AI-related progress and adoption are happening at an exponential rate – some argue too quickly.

While this exponential growth has renewed focus on the use of AI, the reality is that academics, scientists, policy-makers, legal professionals and others have been campaigning for some time for the ethical and lawful use and deployment of AI – particularly in the workplace, where applications of AI in the HR function are already abundant (e.g. talent acquisition, administrative duties, employee training).

According to our survey, 75% of companies already use AI tools and technology for hiring and HR purposes.

In this new phase of generative AI, core tenets around AI adoption – such as governance, accountability and transparency – are more important than ever, as are concerns over the consequences of poorly deployed AI.

For example, unchecked algorithms can produce biased and discriminatory outcomes, perpetuating inequities and undermining progress on workforce diversity. Data privacy is another concern: breaches can easily occur when employee data is collected without proper anonymisation.

Generative AI has also given rise to new IP considerations, raising questions about ownership of both the inputs to and outputs from third-party programmes, and about subsequent copyright infringement.

Broadly, we have seen governments and regulators scrambling to implement AI-related legislation and regulatory enforcement mechanisms. In the US, a key focus of emerging legislation will be the use of AI in hiring and HR-related operations.

Litigation, including class actions, is also on the horizon. We are already seeing the first wave of generative AI IP litigation in the US, and these early court decisions are shaping the legal landscape in the absence of existing regulation.

Organisations that implement generative AI should also assume that data fed into AI tools and queries will be collected by the third-party providers of the technology. In some cases, these providers will have rights to use and/or disclose those inputs.

As employers look to equip their workforces with generative AI tools, are they putting sensitive data and trade secrets at risk? In short, yes. All in all, each new development seems to open questions faster than organisations, regulators and courts can answer them.

How can organisations enhance their AI preparedness?

Generative AI is changing the paradigm, and risks around specific use cases will continue to arise. To stay ahead, organisations will need to move current approaches beyond siloed efforts and bring together discrete functions under the umbrella of a strong governance framework.

While many organisations rely on data scientists to spearhead AI initiatives, all relevant stakeholders, including legal, the C-suite, boards, privacy, compliance and HR, need to be involved throughout the entire decision-making process.

This representation gap was made clear in our survey findings. Currently, only 54% of respondents said their organisation involves HR in the decision-making process for AI tools, and only 36% of respondents said they have a Chief AI Officer (CAIO) in place.

In this high-risk environment, the CAIO will play a critical role in ensuring relevant governance and oversight are in place at the C-suite level, involving HR in training, and fostering a cross-functional AI team.

Hand in hand with this, organisations should prepare and follow an internal governance framework that accounts for enterprise risks across use cases and allows the company to efficiently make the correct compliance adjustments once issues are identified.

The risk for companies with no AI governance structure and a lack of oversight from key stakeholders – or ones that rely wholesale on third-party tools – is the use of AI tools in a way that creates organisational legal liability (e.g., discrimination claims).

Virtually all decision-making, whether AI-based or otherwise, involves bias. Companies that use these tools must develop a framework that defines an approach to assessing bias and a mechanism for testing for and avoiding unlawful bias, while also ensuring relevant data privacy requirements are met.

Efforts to combat bias should be further supported by effective measures for pre- and post-deployment testing.
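As one illustration of what such a testing mechanism might include, the sketch below computes the adverse impact ratio, a widely used screening statistic associated with the EEOC's "four-fifths" rule of thumb for selection procedures. The group names and counts are hypothetical, and this statistic is only a first-pass flag, not a legal determination of unlawful bias:

```python
# Hedged sketch: screening a hiring tool's outcomes for adverse impact.
# Groups and numbers below are hypothetical, for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the tool selected for a given group."""
    return selected / applicants

def adverse_impact_ratio(group_rate: float, reference_rate: float) -> float:
    """Ratio of a group's selection rate to the highest group's rate.
    Under the four-fifths rule of thumb, values below 0.8 are commonly
    treated as a flag warranting further review."""
    return group_rate / reference_rate

# Hypothetical audit data: group -> (applicants, selected)
audit = {"group_a": (200, 90), "group_b": (180, 54)}

rates = {g: selection_rate(sel, apps) for g, (apps, sel) in audit.items()}
reference = max(rates.values())  # highest selection rate as the benchmark
flags = {g: adverse_impact_ratio(r, reference) < 0.8 for g, r in rates.items()}

# group_a selects at 0.45, group_b at 0.30; group_b's ratio of 0.30/0.45
# falls below 0.8 and is flagged for review.
print(rates)
print(flags)
```

In practice, such checks would run both pre-deployment (on historical or synthetic applicant data) and post-deployment (on live outcomes), feeding results back into the governance framework described above.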

Companies that deploy AI must also ensure there are processes in place that provide a clear understanding of the data sets being used, algorithmic functionality and technological limitations, as proposed legislation will likely include reporting requirements.

The final outlook for AI

The takeaway is simple: AI is being widely and quickly adopted and provides many benefits. But it is being rolled out and developed so rapidly that strategic oversight and governance are even more critical to its responsible use and to risk mitigation.

Many organisations are both unprepared for AI and underappreciative of its risks, which makes the willingness to deploy this technology without proper guardrails concerning.

Fortunately, by establishing strong governance and oversight structures, organisations can withstand these technological tides no matter where they are in their AI journeys.

Beyond this, the longer-term solution to managing AI-related risk will rely on informed stakeholders across legal, regulatory and the private sector joining forces to advance legislation, codes of practice or guidance frameworks that recognise both the opportunities and risks the technology presents.

With a secure framework in place, organisations can enable themselves to deploy AI technology and harness its benefits more confidently.

This article was first published by the World Economic Forum.

Krissy Katzenstein
Partner at Baker & McKenzie LLP

Krissy Katzenstein is a partner in the Employment & Compensation Practice Group in Baker McKenzie’s New York office. Krissy represents employers in a wide range of employment disputes, with a focus on class and collective actions involving systemic discrimination as well as federal and state agency investigations of systemic discrimination and harassment claims. Krissy was named a “Rising Star” in Employment Law by Law360 in 2019.

Bradford Newman
Partner at Baker & McKenzie LLP

Bradford Newman is a litigation partner resident in Baker McKenzie's Palo Alto Office and chair of the North America Trade Secrets Practice. According to Chambers USA, Brad is a "recognised authority on trade secrets cases" who "is valued for his tenacious, intelligent and thoughtful approach to trade secrets matters." Bradford regularly serves as lead trial counsel in cases with potential eight and nine-figure liability, and has successfully litigated (both prosecuting and defending) a broad spectrum of trade secrets cases in state and federal courts throughout the country.

Robin J. Samuel
Partner at Baker & McKenzie LLP

Robin Samuel is a partner in Baker McKenzie’s Employment and Compensation Practice Group, leader of the California Labor & Employment practice group, co-chair of the Firm’s Workforce Redesign service line, and Steering Committee member for the North American Employment and Compensation practice.

Julia M. Wilson
Partner at Baker & McKenzie LLP

Julia Wilson is a partner in Baker McKenzie's Employment & Compensation team in London and co-chair of the Firm's Workforce Redesign client solution. Julia also leads the employment data privacy practice in London. Julia advises multinational organisations on a wide range of employment and data protection matters. She is highly regarded by clients, who describe her as a “standout” performer who "knows how we think."
