For the past few months, we at Rydesafely have been running customer discovery calls one after another, strictly abiding by the principles of The Mom Test. This effort was tightly coupled with our intent to make the most of our respective experiences of struggling to deliver machine-learning-based software that works reliably all the time. Now, with the backing of EF, and some luck, I am happy to reveal Rydesafely to the world.
What the heck is the problem I am talking about:
I had first-hand experience of this problem while working on a NASA competition called SRCP2, which aimed to build autonomy solutions for its rovers. Here, my fine-tuned, Earth-trained ML models would always run into false positives and false negatives on data collected from the lunar surface, throwing off the whole underlying positioning estimate. No matter how many times I repeated the tedious data collection, there would always be some orientation that would spoil the party. At a different point in the space-time fabric of our universe, Tom saw the same problem rearing its ugly head while working at JLR, where even a big Tier-1 supplier failed to deliver such a system in an eight-month period. Fortuitously, our stars aligned and we met at the same point in the space-time continuum: Toronto.
Crux of the problem:
The problem we are solving through Rydesafely is that the flip side of the ML-based opportunity in the automotive space is this: existing processes for fault analysis and failure-mode identification, including the well-established ISO 26262 standard, all fall short of delivering the indispensable safety guarantees. Traditional software-engineering methodologies are not suited for testing machine-learning systems, because they were made to test software whose logic is known beforehand, not software that is supposed to figure its logic out automatically. Metrics like coverage have no meaning when it comes to an ML model. Failures in such software can take millions of miles of driving before bubbling up, making brute-force road testing an infeasible way to validate these systems. Simulators can help, but they have their own limitations around fidelity and can only mimic known situations. The consequence is that such solutions have become unprofitable and hard to build and scale, mainly because it has been hard to demonstrate safety. In some cases, this quirkiness of neural networks has pushed the hopes and aspirations of smart commuting and transport to the brink of a complete end. However, not all is lost, and there is light at the end of the tunnel.
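To make the coverage point concrete, here is a minimal, purely illustrative sketch (the weights, bias, and inputs are made up for illustration): a toy linear classifier whose inference code reaches 100% line coverage with a single test case, even though that coverage tells us nothing about how the learned weights behave on a slightly perturbed input.

```python
# Purely illustrative toy example -- the weights and inputs are invented,
# not from any real model or test suite.

def predict(features, weights=(0.9, -2.0), bias=0.05):
    """Toy binary classifier: returns 1 if the weighted sum is positive."""
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return 1 if score > 0 else 0

# A single test input executes every line of predict(): 100% line coverage.
assert predict((1.0, 0.4)) == 1

# Yet coverage says nothing about the learned decision boundary:
# a small perturbation of the same input flips the prediction.
assert predict((1.0, 0.6)) == 0
```

Full coverage of the inference code is trivially achieved by one input, because the "logic" being tested lives in the weights, not in the branches of the code, which is exactly why coverage-style metrics break down for ML systems.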
Consider joining us:
As we take on this hard challenge, one tied to road crashes that kill 1.35 million people every year and cause losses of about $1.7 trillion, we will need a lot of helping hands. If your inner calling prompts you to work on a problem that can change the way humans go about their daily lives in a big, positive way, then we might be a good place for you. More information is here.