In this contributed article, Philip Miller, a Customer Success Manager for Progress, discusses the emergence of data bias in AI and what steps business leaders and IT teams can take to avoid it. Specifically, Philip discusses how data bias arises from lackluster datasets and how human oversight paired with proper data entry can improve AI performance.
16-Year-Old Data Scientist Creates R Shiny App to Champion Gender Equality in Sports Media Coverage of NCAA Women’s Basketball
Nathaniel Yellin, a 16-year-old student, has completed a new study that reveals significant gender bias in sports media coverage of female athletes and, in particular, college basketball players. Yellin has pursued his passions for sports, data science and inspiring change by creating SIDELINED, an organization and interactive R Shiny application.
How Organizations Can Avoid Data Bias in the Age of AI
In this contributed article, Ken Payne, Hyland’s Product Manager for Automation, discusses how organizations can avoid the risks associated with implementing AI, mitigate data bias, improve data relevance, increase transparency, bolster trust and ultimately set themselves on the path to more ethical AI.
7 Reasons For Bias In AI and What To Do About It
In this contributed article, Alexandra Ebert, Chief Trust Officer at MOSTLY AI, discusses 7 important ways that machine learning models become biased, along with techniques for prevention. The power of AI is that it scales processes so effortlessly that it can amplify both the good and the bad far beyond human scale.
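For readers who want a concrete sense of what a basic bias check looks like, here is a minimal Python sketch. It is illustrative only and not drawn from the article: the dataset, column names and interpretation are assumptions. It computes per-group selection rates and a simple disparate-impact ratio, one common first pass for spotting skew in training data or model outputs.

```python
# Illustrative sketch of a generic data-bias check, not the techniques
# described in the article. Data and column names are hypothetical.
import pandas as pd

def selection_rates(df: pd.DataFrame, outcome: str, group: str) -> pd.Series:
    """Positive-outcome rate per group (e.g. approvals per applicant group)."""
    return df.groupby(group)[outcome].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest group selection rate.
    Values well below 1.0 suggest some groups see far fewer
    positive outcomes in the data."""
    return rates.min() / rates.max()

# Hypothetical example: loan approvals by applicant group
df = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0],
})

rates = selection_rates(df, outcome="approved", group="group")
print(rates)                          # group A ≈ 0.67, group B = 0.25
print(disparate_impact_ratio(rates))  # ≈ 0.38, a large disparity
```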
Genesis of a Model Intelligence Platform – Truera
In this start-up highlight piece, we discuss how a CMU professor and his former grad student are ushering in a new era of responsible AI, and helping companies address bias in their AI models. This is a short story of the genesis of Truera.
AI-driven Platform Identifies and Remediates Biases in Data
Synthesized has released the Community Edition of its data platform for Bias Mitigation. Released as a freemium version, the offering incorporates AI research and cutting-edge techniques to enable any organization to quickly identify potential biases within their data and immediately start to remediate these flaws.
Addressing AI Trust, Systemic Bias & Transparency as Business Priorities
Our friend Dr. Stuart Battersby, CTO of Chatterbox Labs (an enterprise AI company), reached out to us to share how his company built a patented AI Model Insights Platform (AIMI) to address the lack of explainability and trust, systemic bias, and vulnerabilities within any AI model or system.
How Do We Make It Easier To Trust?
In this contributed article, Sean Beard, Vice President at Pariveda Solutions, discusses how automating trust presents a new set of challenges to an organization due to the subjective nature of trust. Businesses must develop a better understanding of bias in their data and how different business contexts are applied to that data. With ample misinformation surrounding us all the time, organizations can use AI, machine learning, and human oversight to gain a clearer understanding of what information they can trust and what they can’t.
Researcher Develops Algorithm to Make Artificial Intelligence Fairer
A researcher from Queen’s University Belfast has developed an innovative new algorithm that will help make artificial intelligence (AI) fairer and less biased when processing data. Companies often use AI technologies to sift through huge amounts of data in situations such as an oversubscribed job vacancy or in policing when there is a large volume of CCTV data linked to a crime.
DataRobot Reports that Nearly Half of AI Professionals are Very to Extremely Concerned about AI Bias
DataRobot, the leader in enterprise AI, released new research revealing that nearly half (42%) of AI professionals across the U.S. and U.K. are “very” to “extremely” concerned about AI bias. The research — based on a survey of more than 350 U.S. and U.K. executives involved in AI and machine learning purchasing decisions — uncovered that “compromised brand reputation” and “loss of customer trust” are the most concerning repercussions of AI bias, prompting 93% of respondents to say they plan to invest more in AI bias prevention initiatives in the next 12 months.