As a kid, I dreamed about the flying cars and jetpacks that would hopefully define our future lives. Up until recently, the technology that could make them even remotely possible was both expensive and difficult to manufacture.
Now, thanks to 3-D printing and machine learning, engineers can quickly and cost-effectively design, test and commercialize new products, making the era of autonomous vehicles — yes, even flying cars — closer to reality than ever.
Drawing on years of combined academic and industry experience in millimeter-wave technology, I discovered a new way to manufacture the Luneburg lens, a spherically symmetric gradient-index lens used to build efficient microwave antennas and radar calibration standards. Today, I’ve figured out how to apply this 1940s-era technology to autonomous vehicles. Our sensor now surpasses phased-array antennas for wireless communication and serves as the ultimate eye for autonomous cars, with a 360-degree field of vision and optimal clarity — roughly double a human driver’s field of view.
What used to be next to impossible to create can now be done in minutes or hours. This has enabled us to commercialize the technology and launch our Tucson, Arizona-based startup, Lunewave.
What changed? The traditional method of manufacturing the Luneburg lens is akin to building an onion from the inside out: tedious and prone to error. Making the lens manually was practical only up to about 10 gigahertz (GHz). Using 3-D printing together with our design methodology, however, we can digitize the spherical lens into more than 6,500 chambers, producing a lens that operates at 77 GHz — the band required for automotive radar.
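To see why the lens must be digitized at all, consider the classic Luneburg profile, in which the refractive index varies continuously from √2 at the center to 1 at the surface: n(r) = √(2 − (r/R)²). A 3-D printer can only realize discrete regions, so the continuous gradient is approximated by many small cells, each with its own effective permittivity. The sketch below illustrates the idea with simple concentric shells; it is not Lunewave's actual chamber geometry, and the shell count is arbitrary.

```python
import math

def luneburg_index(r, R=1.0):
    """Ideal Luneburg refractive index profile: n(r) = sqrt(2 - (r/R)^2)."""
    return math.sqrt(2.0 - (r / R) ** 2)

def shell_permittivities(n_shells, R=1.0):
    """Approximate the continuous gradient with discrete shells,
    evaluating the ideal profile at each shell's mid-radius.
    Relative permittivity is n^2."""
    eps = []
    for i in range(n_shells):
        r_mid = (i + 0.5) * R / n_shells
        n = luneburg_index(r_mid, R)
        eps.append(n * n)
    return eps

# Permittivity steps down from ~2 at the center to ~1 at the surface;
# a printer realizes each step by varying the local dielectric fill.
eps = shell_permittivities(10)
```

The more cells used, the closer the stepped structure tracks the ideal gradient, which is why a 6,500-chamber digitization can reach automotive frequencies that hand assembly could not.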
Aside from the ability to manufacture the lens quickly and accurately, we needed a way to address interference mitigation and object classification. To make the sensor safe enough for autonomous driving, models must be trained on large amounts of data. For example, how do we distinguish a box on the ground from a person?
All this requires large-scale data collection, processing and analysis. Using IBM Watson Studio, we’re working with IBM engineers and their data-science experts to process and calibrate that data, train our models, and achieve real-time detection of multiple objects on the road.
My endeavor is just one example of how machine learning and 3-D printing are advancing the Internet of Things (IoT). For everything to be wirelessly connected, we’ll need more antennas, and each object requiring an antenna will have its own design requirements.
How do we do that more efficiently? One way to think about it: thousands, if not millions, of designs already exist. Perhaps none of them is exactly what you want, but they all share some commonality.
Around 90 percent of the time, you are reinventing either your own previous work or somebody else’s. If we can use machine learning to automate those designs, we can save on expensive engineering labor and arrive at the best design for each application. Add 3-D printing, and the design degrees of freedom grow dramatically while the prototyping cycle shrinks from an average of weeks to days.
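The reuse idea above can be sketched as a retrieval step: given a new antenna specification, find the most similar prior design and use it as a starting point instead of designing from scratch. Everything below — the design library, the feature choices, and the distance scaling — is a hypothetical illustration, not Lunewave's actual pipeline.

```python
# Hypothetical design library; fields and values are illustrative only.
prior_designs = [
    {"name": "patch-24", "freq_ghz": 24.0, "gain_dbi": 12.0},
    {"name": "lens-77",  "freq_ghz": 77.0, "gain_dbi": 20.0},
    {"name": "horn-60",  "freq_ghz": 60.0, "gain_dbi": 15.0},
]

def distance(spec, design):
    """Scaled Euclidean distance between a target spec and a prior design.
    The scale factors (100 GHz, 30 dBi) are arbitrary normalizers."""
    df = (spec["freq_ghz"] - design["freq_ghz"]) / 100.0
    dg = (spec["gain_dbi"] - design["gain_dbi"]) / 30.0
    return (df * df + dg * dg) ** 0.5

def closest_design(spec):
    """Return the prior design most similar to the target spec."""
    return min(prior_designs, key=lambda d: distance(spec, d))

# A 79 GHz, 18 dBi target retrieves the 77 GHz lens as its starting point.
start = closest_design({"freq_ghz": 79.0, "gain_dbi": 18.0})
```

In practice this nearest-neighbor lookup would be only the first stage; a learned model could then adapt the retrieved geometry toward the new specification before any prototype is printed.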
3-D printing is encouraging our own innovation, and machine learning is enabling us to find the best design for each application. Today, Lunewave’s radar sensors are being developed for autonomous driving, but thanks to the design freedom we now possess, we can easily envision — and are now planning for — other applications of the technology, from unmanned aerial vehicles to robotics.
A few decades ago, I believed the future of flight meant flying cars. Today I can say that, thanks to 3-D printing and machine learning, the Lunewave sensor will help us get there.
Learn more about Watson Studio.
Hao Xin is the CTO and co-founder of Lunewave and a Professor of Electrical and Computer Engineering at the University of Arizona. Find him on LinkedIn.