I made the 2023 MIT Tech Review 35 Innovators Under 35 global list!

Neural networks often make decisions that even the designers of the systems don’t fully understand. This “explainability” problem makes it harder to fix flaws such as biased or inaccurate results. Daniel Omeiza, 31, is working to solve the explainability problem in self-driving cars; he has developed techniques that give engineers and ordinary human drivers alike visual and language-based explanations of why a car reacts in a specific way.

His most recent work automatically generates commentary about a car’s actions, including auditory explanations, driving instructions, and a visual graph, by using a decision-tree technique that parses data from the car’s perception and decision-planning systems. Omeiza’s model, which is flexible enough to work with different autonomous cars, can either use the car’s previously recorded data or process information about the vehicle’s actions during operation to generate likely explanations. He is currently working on integrating traffic laws into his system.

Omeiza is motivated by a desire to improve the safety of self-driving cars and to help AI engineers code systems more efficiently. He hopes that his model increases consumer trust in AI technology. “Deep-learning models sometimes alienate people who need explanations to trust the system,” he says. See link…
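For readers curious what a decision-tree explanation pipeline can look like in practice, here is a minimal illustrative sketch in Python. It is not Omeiza’s actual model: the feature names (pedestrian_ahead, traffic_light_red), the thresholds, and the tiny hand-built tree are all hypothetical, and a real system would learn the tree from the vehicle’s perception and planning logs. The sketch only shows the core idea of walking a tree on one observation and turning the tests that fired into a short natural-language explanation.

```python
# Minimal sketch of tree-based explanation generation, loosely in the spirit
# of the approach described above. The feature names, thresholds, and the
# hand-built tree are hypothetical illustrations, not the actual model.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    """A node in a small decision tree over perception/planning features."""
    feature: Optional[str] = None      # feature tested at this node (None for leaves)
    threshold: float = 0.0             # numeric threshold for the test
    left: Optional["Node"] = None      # branch taken when feature <= threshold
    right: Optional["Node"] = None     # branch taken when feature > threshold
    action: Optional[str] = None       # predicted action at a leaf (e.g. "stop")

def explain(node: Node, obs: dict, reasons: list) -> str:
    """Walk the tree on one observation, collect the tests that fired,
    and render them as a short natural-language explanation."""
    if node.action is not None:
        because = " and ".join(reasons) if reasons else "no triggering conditions"
        return f"Action: {node.action}, because {because}."
    value = obs[node.feature]
    if value > node.threshold:
        reasons.append(f"{node.feature.replace('_', ' ')} is {value}")
        return explain(node.right, obs, reasons)
    return explain(node.left, obs, reasons)

# A toy tree: stop for pedestrians or red lights, otherwise keep going.
tree = Node(
    feature="pedestrian_ahead", threshold=0.5,
    right=Node(action="stop"),
    left=Node(
        feature="traffic_light_red", threshold=0.5,
        right=Node(action="stop"),
        left=Node(action="continue"),
    ),
)

# Hypothetical snapshot of features from the perception/planning stack.
observation = {"pedestrian_ahead": 1, "traffic_light_red": 0}
print(explain(tree, observation, []))
# -> Action: stop, because pedestrian ahead is 1.
```

Because the explanation is read directly off the path through the tree, the same structure can be rendered as spoken commentary, on-screen driving instructions, or a visual graph of the decision path.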