UCLA Archives - insideAI News
https://insideainews.com/tag/ucla/
Illuminating AI's Frontiers: Your Go-To News Destination.

Generative AI Models Are Built to Hallucinate: The Question is How to Control Them
Tue, 16 Jan 2024 11:00:00 +0000
https://insideainews.com/2024/01/16/generative-ai-models-are-built-to-hallucinate-the-question-is-how-to-control-them/
In this contributed article, Stefano Soatto, Professor of Computer Science at the University of California, Los Angeles, and a Vice President at Amazon Web Services, discusses how generative AI models are designed and trained in a way that makes hallucination an inherent product of any generative model. Rather than trying to prevent generative AI models from hallucinating, he argues, we should design AI systems that can control them.
Hallucinations are indeed a problem, a big problem, but one that an AI system that includes a generative model as a component can control.

Research Highlights: Using Theory of Mind to Improve Human Trust in Artificial Intelligence
Fri, 04 Mar 2022 14:00:00 +0000
https://insideainews.com/2022/03/04/research-highlights-using-theory-of-mind-to-improve-human-trust-in-artificial-intelligence/
eXplainable Artificial Intelligence (XAI) has become an active research area for both scientists and industry. XAI develops models whose explanations aim to shed light on the underlying mechanisms of AI systems, bringing transparency to the process and making results interpretable by expert and non-expert end users alike. New research by a team of UCLA scientists focuses on boosting human trust in these increasingly common systems by improving on XAI. Their study was recently published in the journal iScience.

UCLA DataFest Winners Announced, Presentations Posted
Tue, 16 Jun 2020 13:00:00 +0000
https://insideainews.com/2020/06/16/ucla-datafest-winners-announced-presentations-posted/
For the annual UCLA DataFest, students worked hard with data pertaining to the monumental challenge we are all facing: COVID-19. This year's virtual version of ASA DataFest at UCLA brought unforeseen challenges and wonderful opportunities.
This beloved tradition is normally a competition in which groups of three to five students have just 48 hours to make sense of a huge data set and present their findings in five minutes, using just two slides.