10 Strategies for Mitigating Risks Associated With Artificial Intelligence

Recently, I wrote an article for Entrepreneur on the top fears and dangers of generative AI. In it, I shared a number of concerns that people have, along with potential solutions. Generally, I’m of the opinion that competition and the crowdsourcing of ideas will produce the best outcomes.

There has been a lot of chatter about regulating AI in response to potential hazards. While I believe the people calling for regulation are well-intentioned, I don’t see how it produces a good outcome. Like the war on drugs, over-regulation tends to push bad behavior underground rather than stopping it. But I’m also a big believer in platforming a diverse set of ideas from people who see things differently than I do.

As such, I put together this roundup of expert advice on how to mitigate the risks associated with artificial intelligence. I agree with some of the ideas and disagree with others. Take a look, and I’m sure you’ll find nuggets of great information and ideas to help you on your own path of AI discovery.

The ten industry leaders included here are CEOs, CMOs, and co-founders. From practicing responsible AI usage to focusing on ethics and workforce diversity, these experts share their top strategies for addressing the potential risks of artificial intelligence. Are they right? You be the judge.

Practice Responsible AI Usage

Kendall Thomas, Chief Information Security Officer, Emeritus

The popularity of AI and tools like ChatGPT necessitates caution. Reasons include:

1. Verification: AI may provide inaccurate or outdated information, requiring users to verify it independently.

2. Data Privacy: ChatGPT learns from user input, raising concerns about confidentiality. Entering personal, proprietary, or regulated data can compromise privacy.

3. Lack of Sourcing: AI systems often don’t disclose information sources, making it difficult to trace and verify data origins, affecting accountability and documentation.

While AI tools like ChatGPT offer benefits, users must be cautious. Enter information into AI prompts carefully, avoiding confidential or regulated data, such as personally identifiable information (PII) or controlled unclassified information (CUI). Adopt responsible practices to leverage AI’s potential while protecting sensitive information and ensuring accountability.
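
To make that advice concrete, here’s a minimal sketch of one way a team might scrub obvious PII from text before it ever reaches an AI prompt. The regex patterns and the redact helper are hypothetical and intentionally simplistic; a real deployment would use a vetted PII-detection library.

```python
import re

# Hypothetical, intentionally simple patterns; real PII detection
# needs a vetted library and many more data types.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything that looks like PII with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize: Jane Doe (jane.doe@example.com, 555-123-4567) called about her claim."
print(redact(prompt))
# Summarize: Jane Doe ([EMAIL REDACTED], [PHONE REDACTED]) called about her claim.
```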

Emphasize Rigorous Testing and Validation

Josh Amishav, Founder and CEO, Breachsense

One effective strategy for mitigating potential risks associated with artificial intelligence is to conduct rigorous testing and validation. Implement comprehensive testing processes to ensure the reliability, accuracy, and safety of AI systems. Thoroughly assess AI algorithms, models, and data inputs to identify and mitigate biases, vulnerabilities, or unintended consequences. Ongoing monitoring and auditing of AI systems can help identify and rectify issues before they escalate. 

By placing a strong emphasis on testing and validation, organizations can enhance the quality and trustworthiness of their AI systems, reducing potential risks and ensuring that AI technologies perform as intended.
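
To make this concrete, here’s a minimal sketch of the kind of pre-deployment validation gate Amishav describes, using Python and scikit-learn. The dataset, model, and the 0.90 accuracy threshold are illustrative assumptions, not prescribed values.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hold out data the model never sees during training.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Gate deployment on a minimum holdout accuracy (threshold is illustrative).
accuracy = accuracy_score(y_test, model.predict(X_test))
assert accuracy >= 0.90, f"Model failed validation: accuracy={accuracy:.3f}"
print(f"Validation passed: accuracy={accuracy:.3f}")
```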

Allocate Resources for Human Oversight

Max Schwartzapfel, CMO, Schwartzapfel Lawyers

One best practice is allocating enough resources for regular human monitoring and corrective action. AI carries a great deal of risk in the compliance and regulatory areas, and competitors can get very far ahead very quickly in this sector. To combat this, keep decisions actionable, but ensure monitoring is in place to stay within regulatory requirements.

Use AI as a Supplement

Kirkland Gee, Co-founder, Perfect Extraction

Use AI as a supplement, not as an integral part of your processes. Artificial intelligence is still an emerging industry, and it’s difficult to predict how the technology will be integrated over the long term.

For example, while AI can quickly produce written articles for blogs, there’s every chance that Google may eventually penalize this type of content. If you have come to rely on it too heavily, your business will be at risk. To be safe, use AI as an assistant and not as a major part of your process.

Combine Human Oversight and Checking

Julia Kelly, Managing Partner, Rigits

One strategy for mitigating potential risks associated with artificial intelligence (AI) is to use a combination of human oversight and checking. This could involve having humans check the output of AI systems through validation and verification methods, such as post-deployment reviews or continuous testing. 

For example, an organization could employ non-experts to review a sample of results from AI models to assess whether they are accurate, compliant with ethical guidelines, and satisfactory. This would allow incorrect outputs to be identified quickly and addressed before they are distributed more widely.
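
A lightweight version of that sampling step might look like the sketch below, which randomly routes a fraction of AI outputs into a human review queue. The 10 percent sample rate and the record format are assumptions for illustration only.

```python
import random

def sample_for_review(outputs, rate=0.10, seed=None):
    """Randomly select a fraction of AI outputs for human review."""
    rng = random.Random(seed)
    return [item for item in outputs if rng.random() < rate]

# Hypothetical batch of AI-generated answers awaiting spot checks.
outputs = [{"id": i, "text": f"AI answer {i}"} for i in range(1000)]
review_queue = sample_for_review(outputs, rate=0.10, seed=7)

# Reviewers mark each sampled item accurate or inaccurate; failures feed
# back into retraining or prompt fixes before wider distribution.
print(f"{len(review_queue)} of {len(outputs)} outputs queued for human review")
```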

Develop and Enforce AI Safety Standards

Tarun Saha, Co-founder and CEO, StallionZo

One effective way to mitigate potential risks associated with artificial intelligence (AI) is to develop and enforce AI safety standards. These standards should be established by a consortium of experts in the field of AI, including representatives from academia, industry, and government. 

The standards would aim to ensure that AI systems are developed to be safe, reliable, and transparent. This would require the development of guidelines for ethical and responsible use of AI in various domains, such as healthcare, transportation, and finance. 

Moreover, to ensure compliance with these standards, AI systems would need to undergo rigorous testing, including simulated and real-world scenarios, before deployment. A comprehensive system of independent auditing and regulation would also be necessary to ensure continued compliance with these standards.

Promote International Collaboration

Tom Miller, Director of Marketing, Fitness Volt

Given the global nature of AI, international collaboration and standardization efforts are critical for risk mitigation. Governments, organizations, and researchers around the world should work together to establish shared norms, standards, and best practices for AI development and deployment.

This includes sharing knowledge, skills, and data while encouraging collaboration to address common issues such as bias, privacy, and safety. I believe that international collaboration can help to promote responsible AI practices, avoid the misuse of AI technology, and provide a consistent and ethical approach to AI across borders.

Implement AI Regulation and Governance

Kenny Kline, President and Financial Lead, BarBend

In my opinion, governments and regulatory agencies should build comprehensive frameworks for AI regulation and governance. These frameworks should include ethical norms, data privacy legislation, safety requirements, and mechanisms for accountability.

Policymakers can ensure that AI systems are developed and utilized responsibly by enacting clear and enforceable laws, reducing possible hazards, and protecting the interests of individuals and society as a whole.

Ensure AI Transparency and Accountability

Brenton Thomas, CEO, Twibi

One strategy for mitigating potential risks associated with artificial intelligence is to ensure that AI systems are transparent and accountable. This means that the systems should be designed in a way that allows people to understand how they work and to hold them accountable for their actions.

There are a number of ways to achieve this. One is to use explainable AI techniques, which let people see how an AI system arrives at its decisions. This can help identify potential biases in the system so they can be corrected.
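
One common explainability technique is permutation importance, which shuffles each input feature and measures how much the model’s score drops. Here’s a minimal sketch using scikit-learn; the dataset and model are stand-ins, not a recommendation.

```python
from sklearn.datasets import load_iris
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

# Train a simple stand-in model.
data = load_iris()
model = LogisticRegression(max_iter=1000).fit(data.data, data.target)

# Shuffle each feature and measure the accuracy drop; a large drop
# means the model leans heavily on that feature.
result = permutation_importance(
    model, data.data, data.target, n_repeats=10, random_state=0
)
for name, score in zip(data.feature_names, result.importances_mean):
    print(f"{name}: {score:.3f}")
```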

Another is to use auditing and monitoring tools. These can help surface potential problems with AI systems, track their performance over time, and flag areas where they need to be improved.
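
As one example of what such monitoring might check, a drift detector can compare the distribution of live inputs against the training data. The sketch below uses a two-sample Kolmogorov-Smirnov test from SciPy; the synthetic data and the conventional 0.05 significance threshold are assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Stand-in data: a feature as seen at training time vs. in production.
training_feature = rng.normal(loc=0.0, scale=1.0, size=5000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=5000)  # drifted mean

# A small p-value suggests live inputs no longer match the training
# distribution, a signal to audit the model before accuracy degrades.
stat, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.05:
    print(f"Drift detected (KS statistic={stat:.3f}, p={p_value:.2e})")
else:
    print("No significant drift detected")
```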

Focus on Ethics and Workforce Diversity

Timothy Allen, Sr. Corporate Investigator, Corporate Investigation Consulting

In my opinion, AI systems should be built and developed with ethical issues in mind. Organizations should create ethical review committees to analyze AI initiatives, uncover any biases, and ensure fairness. Efforts should also be made to diversify the AI workforce in order to reduce the danger of bias and discrimination in AI systems.
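
One simple check an ethical review committee might run is comparing a model’s positive-outcome rates across demographic groups, a metric known as demographic parity. The sketch below is illustrative: the decisions, group labels, and the four-fifths-rule threshold are all assumptions, not a complete fairness audit.

```python
from collections import defaultdict

# Hypothetical model decisions: (group, approved) pairs.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {group: approvals[group] / totals[group] for group in totals}
print("Approval rates:", rates)  # group_a: 0.75, group_b: 0.25

# Flag when any group's rate falls below 80% of the highest group's rate
# (the "four-fifths rule" used in disparate-impact screening).
if min(rates.values()) < 0.8 * max(rates.values()):
    print("Potential disparate impact: escalate to the ethics review committee")
```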

Dennis Consorte helps startups and small cap public companies make an impact through digital marketing, publicity, and team building.