Fully Autonomous technologies driving Global driverless car revolution

Autonomous vehicles are expected to ease congestion, shorten commutes, reduce fuel consumption, slow global warming, enhance accessibility, free parking spaces for better uses, and improve public health and social equity. Analysts predict that by 2050 self-driving cars will save 59,000 lives and 250 million commuting hours annually and support a new “passenger economy” worth $7 trillion.

 

Most global automakers, including General Motors, Ford, Volkswagen, Toyota, Honda, Tesla, Volvo, and BMW, along with technology companies such as Google, are actively developing autonomous-vehicle technology. They are trying to make drivers obsolete, handing control of the wheel to a computer that can make intelligent decisions about when to turn and how to brake.

 

A self-driving car (also known as an autonomous car or driverless car) is a vehicle that uses a combination of sensors, radar, cameras, and artificial intelligence to travel to destinations without needing a human driver.

 

Autonomous driving levels 0 to 5

The National Highway Traffic Safety Administration has adopted the Society of Automotive Engineers’ (SAE) levels for automated driving systems, ranging from complete driver control to full autonomy.

[Image: What is an Autonomous Car? – How Self-Driving Cars Work (Synopsys)]

Level 0: At Level 0, the human driver controls everything: steering, brakes, throttle, and power.

Level 1: At this driver-assistance level, most functions are still controlled by the driver, but a specific function (such as steering or accelerating) can be handled automatically by the car. Level 1 automation is a common feature in most current models from major automakers such as Audi, BMW, and Mercedes-Benz.

Level 2: At Level 2, at least one driver-assistance system automates “both steering and acceleration/deceleration using information about the driving environment,” such as adaptive cruise control combined with lane centering. This means the “driver is disengaged from physically operating the vehicle by having his or her hands off the steering wheel AND foot off pedal at the same time,” according to the SAE. The driver must still be ready to take control of the vehicle at all times, however. Systems such as Volvo Pilot Assist, Mercedes-Benz Drive Pilot, Tesla Autopilot, and Cadillac Super Cruise offer Level 2 automation features.

Level 3: Level 3 automation is referred to as conditional automation. Drivers are still necessary in Level 3 cars, but they can completely shift “safety-critical functions” to the vehicle under certain traffic or environmental conditions. The driver is still present and must intervene if necessary, but is not required to monitor the situation in the way the previous levels demand.

In Level 3 automation, the car’s automated driving system performs all of the dynamic driving tasks with the expectation that the human driver will respond appropriately to a request to intervene. The dynamic driving task includes steering, braking, accelerating, changing lanes, and monitoring the vehicle, along with responding to events happening on the road.

Under the SAE (Society of Automotive Engineers) International automated driving standards, cars with Level 1–3 automation features are considered part of the semi-autonomous vehicle market segment.

Level 4: This is what is commonly meant by “fully autonomous.” Level 4 vehicles are “designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip.” However, this is limited to the vehicle’s “operational design domain (ODD),” meaning it does not cover every driving scenario.

Level 5: This refers to a fully autonomous system whose performance is expected to equal that of a human driver in every driving scenario, including extreme environments such as dirt roads that are unlikely to be navigated by driverless vehicles in the near future.
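
For readers who think in code, the taxonomy above can be captured as a simple data structure. The following is an illustrative sketch, not part of the SAE standard; the enum names and the helper function are invented for clarity.

```python
# Hypothetical sketch: representing the SAE automation levels as an enum so
# downstream logic can reason about how much supervision the driver must give.
from enum import IntEnum

class SAELevel(IntEnum):
    NO_AUTOMATION = 0           # driver controls steering, brakes, throttle
    DRIVER_ASSISTANCE = 1       # one function automated (steering OR speed)
    PARTIAL_AUTOMATION = 2      # steering AND speed automated, driver supervises
    CONDITIONAL_AUTOMATION = 3  # system drives, driver must answer takeover requests
    HIGH_AUTOMATION = 4         # fully autonomous within its operational design domain
    FULL_AUTOMATION = 5         # matches a human driver in every scenario

def driver_must_supervise(level: SAELevel) -> bool:
    """Levels 0-2 require the human to monitor the road at all times."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(driver_must_supervise(SAELevel.PARTIAL_AUTOMATION))  # True
print(driver_must_supervise(SAELevel.HIGH_AUTOMATION))     # False
```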

Challenges and technologies

There are several critical technologies behind safe and efficient autonomous-vehicle operation: smart sensors (cameras, radar, and lidar), AI and machine vision, high-performance computing, network infrastructure, automotive-grade safety solutions, security, and privacy. All of these technologies must integrate seamlessly to ensure safe and successful autonomous-vehicle operation.

 

Fully autonomous driving still faces many challenges, such as localization, mapping, scene perception, vehicle control, trajectory optimization, and higher-level planning decisions, that have yet to be fully solved by systems incorporated into production platforms (i.e., offered for sale), even for a restricted operational domain.

 

Every autonomous system that interacts with a dynamic environment must construct a world model and continually update it. The world must be perceived (sensed through cameras, microphones, and/or tactile sensors) and then reconstructed so that the computer ‘brain’ has an effective, up-to-date model of the world it is in before it can make decisions.
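
As a rough illustration of this perceive–model–decide cycle, the sketch below is a minimal, hypothetical loop; the sensor reader and the braking rule are placeholders, not any vendor’s actual stack.

```python
# Minimal, hypothetical sketch of the perceive -> update world model -> decide
# loop described above. Sensor and planner details are placeholders.
import time

class WorldModel:
    def __init__(self):
        self.obstacles = []        # latest known obstacles
        self.last_update = None    # timestamp of the most recent refresh

    def update(self, detections, timestamp):
        """Replace the obstacle list with the newest perception output."""
        self.obstacles = detections
        self.last_update = timestamp

def read_sensors():
    """Placeholder for camera/radar/lidar input; returns detected obstacles."""
    return [{"type": "pedestrian", "distance_m": 12.0}]

def decide(world):
    """Very crude policy: brake if anything is closer than 10 m."""
    if any(o["distance_m"] < 10.0 for o in world.obstacles):
        return "brake"
    return "cruise"

world = WorldModel()
for _ in range(3):                 # stand-in for the real control loop
    world.update(read_sensors(), time.time())
    print(decide(world))
```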

 

Driverless cars (and some drones) do this through a combination of sensors such as LIDAR (Light Detection and Ranging), traditional radar, stereoscopic computer vision, and GPS to generate a high-definition 3D map of their environment. Combined with high-resolution maps of the world, this allows the vehicle to drive safely to a destination while avoiding obstacles such as pedestrians, bicycles, other cars, medians, children playing, fallen rocks, and trees, and to negotiate its spatial relationship with other cars.

 

The most basic of these is an omnidirectional array of cameras. Teslas, for example, have redundant front cameras that cover different visual depths and angles, so that they can simultaneously detect nearby lane markers, construction signs on the side of the road, and streetlights in the distance. Radar sensors, unimpeded by weather, track the distance, size, speed, and trajectory of objects that may intersect the vehicle’s path, and ultrasonic sensors offer close-range detection, which is particularly useful when parking.

 

By steering the transmitted light, LIDAR can generate a millimeter-accurate 3D representation of its surroundings called a point cloud. These point clouds are compared with 3D maps of the roads, known as prior maps, stored in memory, using well-known algorithms that align them as closely as possible. That makes it possible to identify, with sub-centimeter precision, where the car is on the road. LIDAR also requires 3D SLAM software. In robotic mapping, SLAM (simultaneous localization and mapping) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of the vehicle’s location within it.
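
A toy sketch of the alignment idea follows: it searches over candidate vehicle offsets to find the one that best fits an observed scan to a prior map. Real systems use far richer algorithms such as ICP or NDT within a full SLAM pipeline; the landmarks and numbers here are invented for illustration.

```python
# Toy illustration of aligning an observed point cloud to a prior map by
# brute-force search over candidate positions (assumed, simplified example).
def nearest_dist(p, cloud):
    return min((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 for q in cloud)

def localize(observed, prior_map, candidates):
    """Return the candidate (dx, dy) offset that best aligns the scan to the map."""
    best, best_err = None, float("inf")
    for (dx, dy) in candidates:
        shifted = [(x + dx, y + dy) for (x, y) in observed]
        err = sum(nearest_dist(p, prior_map) for p in shifted)
        if err < best_err:
            best, best_err = (dx, dy), err
    return best

prior_map = [(0.0, 0.0), (5.0, 0.0), (5.0, 5.0)]   # surveyed landmarks
scan      = [(-1.0, 0.0), (4.0, 0.0), (4.0, 5.0)]  # same scene, car offset by 1 m
grid = [(dx / 10, 0.0) for dx in range(-20, 21)]   # candidate offsets, 0.1 m steps
print(localize(scan, prior_map, grid))             # ~(1.0, 0.0)
```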

 

One of LiDAR’s key strengths is the number of areas with potential for improvement. These include solid-state sensors, which could reduce its cost tenfold; sensor range increases of up to 200 m; and 4-dimensional LiDAR, which senses the velocity of an object as well as its position in 3D space. Despite these advances, however, LiDAR is still hindered by one key factor: its significant cost.

 

Thus the world model of a driverless car is much more advanced than that of a typical UAV, reflecting the complexity of the operating environment. Navigation for driverless cars is much more difficult: they not only need maps that indicate preferred routes, obstacles, and no-go zones, but they must also understand where all nearby vehicles, pedestrians, and cyclists are, and where they are all going in the next few seconds. The fidelity of the world model and the timeliness of its updates are the keys to an effective autonomous system.

 

Just as we have Siri, Google, and our own mental maps, driverless cars tap into external sources of geospatial data. Standard GPS is accurate to within several feet, but that is not good enough for autonomous navigation. Industry players are developing dynamic HD maps, accurate to within inches, that would give the car’s sensors some geographic foresight, allowing it to calculate its precise position relative to fixed landmarks. Layering redundant forms of place-awareness helps overcome ambiguity or error in locally sensed data. Meanwhile, that sensor data feeds into and improves the master map, which can send real-time updates to all vehicles on the cloud network.
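
The sketch below shows one common way such redundant position estimates can be layered: inverse-variance weighting, the scalar analogue of a Kalman update. The numbers are illustrative assumptions, not real sensor specifications.

```python
# Hedged sketch: fusing a coarse GPS fix with a more precise map/landmark fix
# by inverse-variance weighting, so the more certain estimate dominates.
def fuse(est_a, var_a, est_b, var_b):
    """Combine two position estimates and return (fused position, fused variance)."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

gps_pos, gps_var = 103.2, 2.5 ** 2   # GPS: accurate to a few metres
map_pos, map_var = 104.9, 0.1 ** 2   # HD-map/lidar fix: accurate to ~10 cm
print(fuse(gps_pos, gps_var, map_pos, map_var))  # result sits close to the map fix
```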

 

A driverless car’s computer must track the dynamics of all nearby vehicles and obstacles, constantly compute all possible points of intersection, and then estimate how traffic is likely to behave in order to decide how to act.
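
As an illustration of that kind of check, the sketch below extrapolates every nearby object with a constant-velocity model and flags any that come too close to the ego vehicle’s predicted path. Real prediction and planning stacks are vastly richer; the tracks and thresholds here are made up.

```python
# Illustrative sketch: predict ego and object motion forward in time and
# report the minimum separation, braking if it falls below a safety margin.
import math

def closest_approach(ego, other, horizon_s=5.0, dt=0.1):
    """Step both constant-velocity tracks forward; return minimum gap in metres."""
    min_gap = float("inf")
    t = 0.0
    while t <= horizon_s:
        ex, ey = ego["x"] + ego["vx"] * t, ego["y"] + ego["vy"] * t
        ox, oy = other["x"] + other["vx"] * t, other["y"] + other["vy"] * t
        min_gap = min(min_gap, math.hypot(ex - ox, ey - oy))
        t += dt
    return min_gap

ego     = {"x": 0.0,  "y": 0.0, "vx": 10.0, "vy": 0.0}   # ego car at 10 m/s
cyclist = {"x": 40.0, "y": 8.0, "vx": 0.0,  "vy": -2.0}  # crossing from the side
gap = closest_approach(ego, cyclist)
print("brake" if gap < 3.0 else "proceed", round(gap, 1))
```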

 

Autonomous Driving With Deep Learning

Autonomous cars have to match humans in decision making while multitasking between different ways of reading and understanding the world, and while distributing their sensors across different widths and depths of field and lines of sight (not to mention dashboards, text messages, and unruly passengers). All this sensory processing, ontological translation, and methodological triangulation can be quite taxing.

 

Artificial intelligence powers self-driving vehicle systems. Developers of self-driving vehicles use vast amounts of data from image recognition systems, along with machine learning and neural networks, to build systems that can drive autonomously. The neural networks identify patterns in the data, which feed the learning algorithms; that data includes images from the vehicles’ cameras. The neural networks learn to recognize traffic lights, trees, curbs, pedestrians, street signs, and other parts of any given driving environment.
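
A minimal sketch of the kind of convolutional classifier such perception systems rely on is shown below, assuming PyTorch. The network, class list, and input size are placeholders; production perception models are far larger and trained on enormous labelled datasets.

```python
# Hypothetical sketch of a small image classifier for camera crops
# (traffic lights, pedestrians, signs). Illustrative only.
import torch
import torch.nn as nn

CLASSES = ["traffic_light", "pedestrian", "road_sign", "vehicle", "background"]

class TinyPerceptionNet(nn.Module):
    def __init__(self, num_classes=len(CLASSES)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

net = TinyPerceptionNet()
dummy_crop = torch.randn(1, 3, 64, 64)             # one 64x64 RGB camera crop
print(CLASSES[net(dummy_crop).argmax(dim=1).item()])
```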

 

Tesla (which, for now, insists that its cars can function without Lidar) has built a “deep” neural network to process visual, sonar, and radar data, which, together, “provide a view of the world that a driver alone cannot access, seeing in every direction simultaneously, and on wavelengths that go far beyond the human senses.”

 

Coming straight out of Stanford’s AI Lab, Mountain View, Calif.-based Drive.ai has adopted a scalable deep-learning approach for working toward Level 4 autonomy (a self-driving system that doesn’t require human intervention in most scenarios). Drive.ai sees deep learning as the only viable way to make a truly useful autonomous car in the near term, says Sameep Tandon, cofounder and CEO. “If you look at the long-term possibilities of these algorithms and how people are going to build [self-driving cars] in the future, having a learning system just makes the most sense. There’s so much complication in driving, there are so many things that are nuanced and hard, that if you have to do this in ways that aren’t learned, then you’re never going to get these cars out there.”

 

“We’re solving the problem of a self-driving car by using deep learning for the full autonomous integrated driving stack—from perception, to motion planning, to controls—as opposed to just bits and pieces like other companies have been using for autonomy,” says Drive.ai cofounder Carol Reiley. “We’re using an integrated architecture to create a more seamless approach.” Each of Drive.ai’s vehicles carries a suite of nine HD cameras, two radars, and six Velodyne Puck lidar sensors that continuously capture data for map generation, for feeding into deep-learning algorithms, and of course for the driving task itself.

 

When you’re developing a self-driving car, the hard part is handling the edge cases, such as weather conditions like rain or snow. Right now, people program in specific rules to handle them; the deep-learning approach instead learns what to do by fundamentally understanding the data, says Tandon. The first step for Drive.ai is to get a vehicle out on the road and start collecting data that can be used to build up the experience of its algorithms. Deep-learning systems thrive on data: the more data an algorithm sees, the better it will be able to recognize, and generalize about, the patterns it needs to understand in order to drive safely.

 

“It’s not about the number of hours or miles of data collected,” says Tandon. “It comes down to having the right type of experiences and data augmentation to train the system—which means having a team that knows what to go after to make the system work in a car. This move, from simulation environments and closed courses onto public roads, is a big step for our company and we take that responsibility very seriously.”

 

Safety and security

Autonomous vehicles won’t gain widespread acceptance until the riding public feels assured of their safety and security, not only for passengers but also for other vehicles and pedestrians. Achieving zero roadway deaths is necessary for universal adoption of autonomous driving and is the objective of the recently released U.S. National Roadway Safety Strategy.

 

Neural Propulsion Systems (NPS), a pioneer in autonomous sensing platforms, issued a paper in March 2022. The paper finds that achieving zero deaths requires sensing and processing a peak data rate on the order of 100 × 10¹² bits per second (100 terabits per second) for vehicles to operate safely under the worst roadway conditions. This immense requirement is about 10 million times greater than the sensory data rate from our eyes to our brains.
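
A quick back-of-the-envelope check of the two quoted figures (purely illustrative arithmetic, not from the NPS paper itself):

```python
# Consistency check: 100 Tb/s stated to be ~10 million times the eye-to-brain
# rate implies an eye-to-brain rate on the order of 10 Mb/s.
peak_rate_bps = 100e12                       # 100 terabits per second
ratio = 10e6                                 # "10 million times greater"
implied_eye_rate_bps = peak_rate_bps / ratio
print(implied_eye_rate_bps / 1e6, "Mb/s")    # ~10 Mb/s
```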

 

The paper also shows that sensing and processing 100 Tb/s can be accomplished by combining breakthrough analytics, advanced multi-band radar, solid-state LiDAR, and advanced system-on-a-chip (SoC) technology. Such an approach would allow companies developing advanced driver-assistance systems (ADAS) and fully autonomous driving systems to accelerate progress.

 

NPS achieved a pilot-scale proof of concept of the core sensor element required for zero roadway deaths at a Northern California airfield in December 2021. One reason for this success is the Atomic Norm, a recently developed mathematical framework that radically changes how sensor data is processed and understood. The Atomic Norm was developed at Caltech and MIT and further developed specifically for autonomous driving by NPS.

 

Road to full autonomy

Driverless cars are moving incrementally toward full autonomy. Many of the basic ADAS building blocks, such as adaptive cruise control, automatic emergency braking, and lane-departure warning, are already in place and under the control of a central computer that assumes responsibility for driving the vehicle. Moving from driver assistance to eventually fully autonomous capability requires the technologies to mature and prices to drop.

 

Tony Tung, Sales Manager for Mobileye Automotive Products and Services (Shanghai), explained during a Symposium on Innovation & Technology how the road from advanced driver-assistance systems (ADAS) to full autonomy depends on real-time ‘sensing’ of the vehicle’s environment; ‘mapping’ for awareness and foresight; and decision-making ‘driving’ through assessing threats, planning maneuvers, and negotiating traffic.

 

Fred Bower, distinguished engineer at the Lenovo Data Center Group, is also optimistic. “Advances in image recognition from deep-learning techniques have made it possible to create a high-fidelity model of the world around the vehicle,” he says. “I expect to see continued development of driver-assist technologies as the on-ramp to fully autonomous vehicles.”

 

Continued advancements in the technology and widespread acceptance of their use will require significant collaboration throughout the automotive and technology industries. Sciarappo sees three landmarks required to ensure the widespread acceptance of autonomous vehicles. “Number one: ADAS features need to become standard. Number two: there needs to be an industrywide effort to figure out how to measure and test the technology and its ability to avoid accidents and put us on that path to that autonomous future,” she says. “Number three: policymakers need to get on board and help figure out how to push this technology forward.”

 

In autonomous driving, key technologies are also approaching the tipping point: object-tracking algorithms, used to identify objects near vehicles, have reached a 90% accuracy rate, and solid-state LiDAR (similar to radar but based on laser light) has been introduced for high-frequency data collection of vehicle surroundings.

 

VoxelFlow’s fast 3D technology scans the area around autonomous vehicles (AVs) and ADAS-equipped vehicles in a 40-meter radius with a response time of three milliseconds, much faster than today’s ADAS systems, which take around 300 milliseconds. STARTUP AUTOBAHN is powered by Plug and Play and sponsored by Daimler and the University of Stuttgart.

 

Many companies are building such HD maps, including Alphabet’s Waymo, the German automakers’ HERE, Intel’s Mobileye, and the Ford-funded startup Civil Maps. They send their own lidar-topped cars out into the streets, harvest “probe data” from partner trucking companies, and solicit crowdsourced information from specially equipped private vehicles; and they use artificial intelligence, human engineers, and consumer “ground-truthing” services to annotate and refine meaningful information within the captured images. Even Sanborn, the company whose incredibly detailed fire-insurance maps anchor many cities’ historic map collections, now offers geospatial datasets that promise “true-ground-absolute accuracy.” Uber’s corporate-facing master map, which tracks drivers and customers, is called “Heaven” or “God View”; the parallel software that reportedly tracked Lyft competitors was called “Hell.”

 

Because these technologies have quickly become viable, major technology companies and automakers such as Google, Nvidia, Intel, and BMW are accelerating efforts to develop self-driving vehicles.

 

Baidu has opened up its driverless-car technology for automakers to use, aiming to become the default platform for autonomous driving and to challenge the likes of Google and Tesla. The Chinese internet giant said that the new project, named Apollo, will provide the tools carmakers need to build autonomous vehicles, including reference designs and a complete software solution with cloud data services. Essentially, Baidu is trying to become to cars what Google’s Android has become to smartphones: an operating system that powers a large number of driverless vehicles.

 

“An open, innovative industry ecosystem initiated by Baidu will accelerate the development of autonomous driving in the US and other developed automotive markets,” Qi Lu, chief operating officer at Baidu, said in a press release.

 

Sciarappo points to the Responsibility-Sensitive Safety (RSS) framework, a safety standard Intel has developed, to help drive this level of acceptance. “The RSS framework is a way for us to start talking about the best practices for keeping cars in safe mode.” While safe on-road operations are the primary aspect of autonomous-vehicle safety and security, the potential for hacking a self-driving vehicle is another key concern.

 

Computing power that could accelerate the arrival of driverless cars

Hod Hasharon-based Valens showcased a new chipset that can transmit up to 2 Gbps of data over a single 50-foot cable. Autonomous vehicles require a lot of data for all those cameras, sensors, and radar units (not to mention the in-car entertainment system) to work, and the data has to move fast. Current in-car cabling tops out at only 100 Mbps, so the Valens solution is significantly faster. Valens also uses standard unshielded cables, which are cheaper and lighter.
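
A back-of-the-envelope illustration of why the 100 Mbps figure is a bottleneck (the resolution, bit depth, and frame rate below are assumptions for the sake of the arithmetic, not Valens specifications):

```python
# Illustrative arithmetic: one uncompressed 1080p camera stream at 30 fps
# already far exceeds a 100 Mb/s in-car link but fits within 2 Gb/s.
width, height, bits_per_pixel, fps = 1920, 1080, 24, 30
camera_bps = width * height * bits_per_pixel * fps
print(round(camera_bps / 1e9, 2), "Gb/s per raw camera")   # ~1.49 Gb/s
print("fits in 100 Mb/s link:", camera_bps < 100e6)        # False
print("fits in 2 Gb/s link:  ", camera_bps < 2e9)          # True
```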

 

Tech giant Nvidia has unveiled a “plug and play” operating system that would allow car companies to buy the computer power needed to process the complex task of real world driving — without the need for a driver to touch the steering wheel or the pedals. Nvidia says its invention of “the world’s first autonomous machine processor” is in the final stages of development and will be production ready by the end of the year.

 

The boss of Nvidia, Jensen Huang, says the most important aspect of autonomous car computer power is not the ability to operate the vehicle but the processing speed needed to double check the millions of lines of data — while detecting every obstacle — and then enabling one computer to make the right decision when the other misdiagnoses danger.

 

Nvidia’s solution is a super-fast computer chip that duplicates every piece of data — gathered from cameras, GPS, lidar, and radar sensors — required to make a decision in an autonomous car. It has “dual execution, runs everything twice without consuming twice the horsepower,” he said. “If a fault is discovered inside your car it will continue to operate incredibly well.” Unveiling the “server on a chip” after pulling it out of his back pocket, Mr Huang said autonomous-car technology “can never fail because lives are at stake”. However, he said the road to driverless cars was “incredibly complex” because the car must “make the right decision running software the world has never known how to write”.
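
The sketch below is a purely conceptual illustration of that dual-execution idea (not Nvidia’s implementation): compute the same decision twice and only act when the results agree, otherwise degrade to a safe behaviour.

```python
# Conceptual sketch of redundant computation with a cross-check and a
# fallback to a minimal-risk behaviour on disagreement. Hypothetical logic.
def plan(sensor_frame):
    """Placeholder for the driving decision computed from one sensor frame."""
    return "keep_lane" if sensor_frame["lane_clear"] else "brake"

def redundant_plan(sensor_frame):
    first = plan(sensor_frame)
    second = plan(sensor_frame)        # duplicated computation path
    if first == second:
        return first                   # results agree: safe to act on them
    return "minimal_risk_stop"         # disagreement: degrade gracefully

print(redundant_plan({"lane_clear": True}))   # keep_lane
print(redundant_plan({"lane_clear": False}))  # brake
```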

 

Nvidia is working on two types of autonomous tech: “Level 4” for cars with drivers who may need to take control, and “Level 5” cars, dubbed “robot taxis,” that don’t require a driver. The company says it is also working on systems that will give drivers voice control to open and close windows or change radio stations, and that track eye movement to monitor fatigue. Nvidia has also developed virtual-reality technology to create a simulator for testing its autonomous-car software off the streets. It can replicate or create dangerous scenarios to “teach” the car new evasive manoeuvres or to detect danger earlier.

 

Network infrastructure

The dynamic HD maps described earlier must be kept current across the whole fleet, yet achieving real-time “truth” throughout the network requires overcoming limitations in data infrastructure. The rate of data collection, processing, transmission, and actuation is limited by cellular bandwidth as well as on-board computing power. Mobileye is attempting to speed things up by compressing new map information into a “Road Segment Data” capsule, which can be pushed between the master map in the cloud and cars in the field. If nothing else, the system has given us a memorable new term, “Time to Reflect Reality,” the metric of lag time between the world as it is and the world as it is known to machines.

 

Rapid and consistent connectivity between autonomous vehicles and outside sources such as cloud infrastructure ensures signals get to and from the vehicles more quickly. The emergence of 5G wireless technology, which promises high-speed connections and data downloads, is expected to improve connectivity to these vehicles, enabling a wide range of services, from videoconferencing and real-time participation in gaming to health care capabilities such as health monitoring.

 

There are several protocols under which autonomous vehicles communicate with their surroundings. The umbrella term is V2X, or vehicle-to-everything, which includes the following (a simplified message sketch appears after the list):

  • Vehicle-to-infrastructure communication, which allows for data exchange with the surrounding infrastructure to operate within the bounds of speed limits, traffic lights, and signage. It can also manage fuel economy and prevent collisions.
  • Vehicle-to-vehicle communication, which permits safe operations within traffic situations, also working to prevent collisions or even near misses.
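
As a concrete illustration of what such an exchange might carry, here is a hypothetical sketch of a vehicle status message. The field names are invented for illustration and do not reproduce any real standard such as SAE J2735.

```python
# Hypothetical sketch of a V2V status broadcast. Illustrative fields only.
from dataclasses import dataclass, asdict
import json, time

@dataclass
class V2VStatusMessage:
    vehicle_id: str
    latitude: float
    longitude: float
    speed_mps: float
    heading_deg: float
    braking: bool
    timestamp: float

msg = V2VStatusMessage(
    vehicle_id="veh-042", latitude=52.52, longitude=13.405,
    speed_mps=13.9, heading_deg=270.0, braking=True, timestamp=time.time(),
)
payload = json.dumps(asdict(msg))   # what would be serialized onto the radio link
print(payload)
```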

Autonomous-vehicle technology resides largely onboard the vehicle itself but requires sufficient network infrastructure, according to Genevieve Bell, distinguished professor of engineering and computer science at the Australian National University and a senior fellow at Intel’s New Technology Group. Also necessary are a road structure and an agreed-on set of rules of the road to guide self-driving vehicles. “The challenge here is the vehicles can agree to the rules, but human beings are really terrible at this,” Bell said during a presentation in San Francisco in October 2018.

 

Vehicle Communications

As automotive technologies continue to advance and vehicles become increasingly connected and autonomous, cellular vehicle-to-everything (C-V2X) technology is a key enabler for connected cars and the transportation industry of the future. Alex Wong, Hong Kong Director of Solution Sales for the global information and communications technology (ICT) solutions provider Huawei, explained the latest C-V2X advancements, with enhanced wireless capabilities including extended communication range, improved reliability, and better transmission performance “enabling vehicles to talk with each other and make our transportation systems safer, faster and more environmentally friendly”.

[Image: Improving Self-Driving Car Safety and Reliability with V2X Protocols (Vince Tabora, Medium)]

Israeli vehicle chip-maker Autotalks has developed “vehicle-to-everything” communication technology that can help self-driving and conventional cars avoid collisions and navigate hazardous roads. V2X communication alerts the autonomous vehicle to objects it cannot directly see (non-line-of-sight), which is vital for safety and enables better decisions by the robot car.

 

 

 

Human-machine interaction

Connected and autonomous vehicles (CAVs) need to be able to understand the limitations of their human driver, and vice versa, says Marieke Martens, a professor of automated vehicles and human interaction at the Eindhoven University of Technology in the Netherlands. In other words, the human driver needs to be ready to take control of the car in certain situations, such as dealing with roadworks, while the car needs to be able to monitor the capacity of the human inside it.

 

‘We (need) systems that can predict and understand what people can do,’ she said, adding that under certain conditions these systems could decide when it’s better to take control or alert the driver. For example, if the driver is fatigued or not paying attention, she says, then the car ‘should notice and take proper actions’, such as telling the driver to pay attention or explaining that action needs to be taken.
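
A rule-of-thumb sketch of how such escalation logic might look is shown below. The signals, thresholds, and responses are assumptions for illustration, not Prof. Martens’ system or any production driver-monitoring design.

```python
# Hypothetical sketch: map simple driver-monitoring signals to an escalating
# response, from no action up to a minimal-risk manoeuvre.
def takeover_policy(eyes_off_road_s: float, hands_on_wheel: bool) -> str:
    if eyes_off_road_s < 2.0 and hands_on_wheel:
        return "no_action"
    if eyes_off_road_s < 5.0:
        return "visual_and_audio_warning"
    if hands_on_wheel:
        return "request_immediate_takeover"
    return "initiate_minimal_risk_manoeuvre"   # e.g. slow down and stop safely

print(takeover_policy(eyes_off_road_s=1.0, hands_on_wheel=True))   # no_action
print(takeover_policy(eyes_off_road_s=6.0, hands_on_wheel=False))  # stop safely
```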

 

Prof. Martens added that, rather than just a screen telling the driver that automated features have been activated, better interfaces, known as HMIs (human-machine interfaces), will need to be developed to mediate between the driver and the car ‘so that the person really understands what the car can and cannot do, and the car really understands what the person can and cannot do.’

 

Protecting Privacy

When we talk about CAVs, we often discuss how they share information with other road users. But, notes Sandra Wachter of the University of Oxford, UK, an associate professor in the law and ethics of AI, data, and robotics, this raises the significant issue of data protection. ‘That’s not really the fault of anybody, it’s just the technology needs that type of data,’ she said, adding that we need to take the privacy risks seriously.

 

That includes sharing location data and other information that could reveal a lot about a person when one car talks to another. ‘It could be things like sexual orientation, ethnicity, health status,’ said Prof. Wachter, with attributes like ethnicity possible to infer from a postcode, for example. ‘Basically anything about your life can be inferred from those types of data.’

 

Solutions include making sure CAVs comply with existing legal frameworks in other areas, such as the General Data Protection Regulation (GDPR) in Europe, and deleting data when it is no longer needed. But further safeguards might be needed to deal with privacy concerns caused by CAVs. ‘Those things are very important,’ said Prof. Wachter.

 

Rigorous testing

Much of the self-driving-car testing that has happened so far has been in relatively easy environments, says Dr John Danaher of the National University of Ireland, Galway, a lecturer in law who focuses on the implications of new technologies. To prove they can be safer than human-driven cars, developers will need to show they can handle more taxing situations.

 

‘There are some questions about whether they are genuinely safe,’ he said. ‘You need to do more testing to actually ascertain their true risk potential, and you also need to test them in more diverse environments, which is something that hasn’t really been done (to a sufficient degree).

‘They tend to be tested in relatively controlled environments like motorways or highways, which are relatively more predictable and less accident-prone than driving on wet and windy country roads. The jury is still out on whether they are going to be less harmful, but that is certainly the marketing pitch.’

 

Waymo catalogs the mistakes its cars make on public roads, then recreates the trickiest situations at Castle, its secret “structured testing” facility in California’s Central Valley. The company also has a virtual driving environment, Carcraft, in which engineers can run through thousands of scenarios to generate improvements in their driving software.

 

References and resources also include:

http://www.techrepublic.com/article/autonomous-driving-levels-0-to-5-understanding-the-differences/

https://spectrum.ieee.org/cars-that-think/transportation/self-driving/how-driveai-is-mastering-autonomous-driving-with-deep-learning

https://spectrum.ieee.org/cars-that-think/transportation/self-driving/driveai-brings-deep-learning-to-selfdriving-cars

http://www.asiaone.com/business/virtual-reality-drones-and-startups-among-highlights-hktdc-hong-kong-electronics-fair

https://www.technologyreview.com/s/612754/self-driving-cars-take-the-wheel/

https://placesjournal.org/article/mappings-intelligent-agents/

 
