When cars collide, the first question asked after confirming that the passengers are safe is: who is responsible? Self-driving cars test the limits of liability under the current legal framework, but even with manned vehicles, it is important to eliminate uncertainty and prove responsibility.
The question goes: whose fault is it if a self-driving car gets into an accident? Is it the vehicle owner's, or the manufacturer's? Since the owner does not make navigational decisions, do algorithm deficiencies and mechanical malfunctions place the owner in a precarious position, or would the car's manufacturer be liable for damages? Some Google employees have argued that tickets should be issued to their company rather than to car owners, while another argument holds that responsibility should fall upon the car itself.
The basis for this argument is protecting owners from the possibility of frivolous lawsuits. Rather than subjecting owners to liability, the car is independently insured, facilitating payouts to victims through the car's own separate legal entity. Critics counter, though, that this arrangement creates the potential for a massive evasion of responsibility.
As artificial intelligence becomes ingrained in everyday life, questions of responsibility and liability are sure to arise. When a machine makes a mistake, whom do we hold accountable? The person who programmed the machine? The person who was using the machine at the time of the accident? Or is the harm simply a social cost? These questions and issues will be answered as we move into an ever more automated future.
This debate will develop as self-driving cars continue to enter the mainstream, with cars potentially being granted legal personhood. For now, liability for manned vehicles almost always falls upon individuals. When things get tricky, the personal injury lawyers at 1LAW can clarify the ambiguity, help determine who is liable, and fight to properly cover damages and make you whole.