February 27, 2021

Event-Based Observatory Enables Star Gazing During the Day

Author’s note: This is the first of a short series of columns focusing on recent winners of the Misha Mahowald Prize for Neuromorphic Engineering. Mahowald was one of the inventors of the address-event representation and is the person for whom the prize is named.

********************

Brains and Machines

Over the last 30 years, I’ve noticed a trend that has really fascinated me: the relationship between artificial intelligence and media. Since media devices have such a huge market, the cheap sensors that we’ve used in robotics, automated inspection, and all sorts of other applications have tended to be those originally developed to relay sights and sounds to human beings.

That would be fine if the criteria for what made good sensors were the same in both fields, but they’re not. For media sensors, you need to collect information that will create a pleasing image (or sound) for a human being, no more and no less. Because of this, the resolution, frame rate, and color spectrum are all geared around the human visual system. Even deeper than that, the concept of a frame — a high-resolution snapshot of the world taken at a particular moment — has more to do with art than with how we see the world functionally.

But we are starting to figure out that if we want machines to be able to see and understand a changing world, we need to create sensors that are not optimized for human biology, but for the information we want the machine to suck out of the environment and process. In truth, real human vision only involves seeing tiny sections of the world at high resolution as we focus on them, and we see those in series, building up a clearer picture of our environment over time. This more pragmatic approach to acquiring information also has advantages for automated systems.

Space observation, but not as we know it
Work done over the last few years by Gregory Cohen, associate professor in neuromorphic systems (algorithms), and his team at Western Sydney University is a good example of how this change of approach can be beneficial. They have been working with the Australian Department of Defence on the problem of detecting and tracking both working satellites and space junk in low and geostationary Earth orbit.

They’ve created a new telescope system (Astrosite) that uses neuromorphic, event-based cameras that are very different from the frame-based cameras we are used to.

For one thing, they don’t take snapshots in time. Instead, each pixel sends information only when the intensity it sees changes. This means that pixels staring at still objects have nothing to say. Because they’re not sending information, that information doesn’t need to be processed, so the machine processing the images has less work to do (and the link between it and the sensor can be lower in bandwidth). Because the sensors also have a very high dynamic range, there is little functional difference between night and day: the same changes in brightness caused by stars or satellites at night are still detected in daylight.
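To make that idea concrete, here is a minimal sketch in Python of the contrast-change logic a single event-camera pixel follows. The function name and threshold value are invented for illustration, not taken from any particular sensor: the pixel emits a signed event only when the log-intensity it sees has moved by more than a fixed threshold since its last event, so a static scene produces no output at all.

```python
import math

def pixel_events(intensity_samples, threshold=0.15):
    # Toy model of one event-camera pixel (illustrative sketch only).
    # Emits a (+1) or (-1) event whenever the log-intensity has moved by more
    # than `threshold` since the last event; a static scene produces nothing.
    events = []
    ref = math.log(intensity_samples[0] + 1e-6)   # level at the last event
    for t, value in enumerate(intensity_samples[1:], start=1):
        log_i = math.log(value + 1e-6)
        while abs(log_i - ref) >= threshold:
            polarity = 1 if log_i > ref else -1
            events.append((t, polarity))          # only changes are reported
            ref += polarity * threshold
    return events

print(pixel_events([100.0] * 10))                    # static patch of sky: []
print(pixel_events([100, 105, 130, 180, 120, 100]))  # a passing star: events
```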

Of course, what has been tricky for the whole neuromorphic community is turning these great sensors (which have been around for many years) into functional, useful systems. Cohen’s team adapted work done in the UK and France on using event-based cameras for simultaneous localization and mapping (SLAM) applications [1].

The algorithms they’ve developed have allowed them to acquire detailed star fields by using the rotation of the Earth as the source of change (see Figure 1), but, with the robot arm integrated into the system, they can also actively target and track objects of interest.

Figure 1 Astrosite stripe: An early image generated by the event-based camera (dark stripe), compared with a star map created using the ground-truth conventional (CCD) camera that is also incorporated into the Astrosite system (stripe outlined in white). (Image: Saeed Afshar and Greg Cohen)
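The team’s actual mosaicing and tracking algorithms are considerably more sophisticated, but the first step can be pictured simply: as the Earth’s rotation sweeps the star field across the sensor, each star leaves a trail of events, and merely accumulating those events into a 2-D histogram already yields a recognizable streaked star image like the stripe in Figure 1. The helper below and its event format are assumptions for illustration, not the ICNS implementation.

```python
import numpy as np

def accumulate_events(events, height, width):
    # Build a crude "event frame": a 2-D histogram of where events occurred.
    # `events` is an iterable of (x, y, polarity) tuples; a star drifting
    # across the sensor leaves a streak of events that shows up as a bright
    # trail in the accumulated image.
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, polarity in events:
        frame[y, x] += polarity
    return frame

# Example: a star drifting one pixel to the right at each time step.
star_trail = [(10, 20, +1), (11, 20, +1), (12, 20, +1)]
print(accumulate_events(star_trail, height=64, width=64)[20, 10:13])  # [1 1 1]
```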

Observatory in a box
The Astrosite itself is impressively self-sufficient. The system is housed in a shipping container for easy transportation, includes its own monitoring station, and the telescopes can be exposed for observation or enclosed for protection as required. For students studying at the university in these Covid times, Prof. Cohen and his staff created one of the most compelling virtual lab visits I’ve ever seen. If you’re interested in the project, I highly recommend you take the tour. The work was done at the International Centre for Neuromorphic Systems (ICNS), a multi-disciplinary research lab, so you can get a look at some of that too.

This work first came to my attention at the Telluride Conference in 2020 when Prof. Cohen gave a presentation as the 2019 recipient of the Misha Mahowald Prize for Neuromorphic Engineering. Mahowald was one of the inventors of the address-event representation/protocol. This allows artificial neurons to broadcast spikes (events) at high speed and with timing more-or-less preserved across a large network, but without the 1:1 communication links that the brain uses. She died before I really got into the field, but continues to be a presence in neuromorphic engineering to this day.
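For readers who haven’t met it, the address-event representation can be pictured roughly as follows. The sketch below uses invented names (AERPacket, encode_spikes) and is not any particular chip’s protocol: instead of a dedicated wire per connection, every spike is sent over a fast shared bus as a small packet carrying the address of the neuron that fired, with packets ordered in time so relative timing is roughly preserved.

```python
from dataclasses import dataclass

@dataclass
class AERPacket:
    # One address-event: which neuron fired, and roughly when.
    address: int        # identity of the spiking neuron (or pixel)
    timestamp_us: int   # coarse timestamp in microseconds

def encode_spikes(spike_times_by_neuron):
    # Time-multiplex spikes from many neurons onto one shared bus.
    # Rather than the brain's dedicated point-to-point wiring, each spike is
    # broadcast as an address; sorting by time keeps ordering (and therefore
    # most of the timing information) intact across the whole network.
    packets = [AERPacket(address, t)
               for address, times in spike_times_by_neuron.items()
               for t in times]
    return sorted(packets, key=lambda p: p.timestamp_us)

# Example: two neurons spiking; the bus carries their events interleaved in time.
bus = encode_spikes({7: [100, 400], 42: [250]})
print([(p.address, p.timestamp_us) for p in bus])  # [(7, 100), (42, 250), (7, 400)]
```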

“Brains and Machines” columnist Sunny Bains teaches at University College London, is the author of Explaining the Future: How to Research, Analyze, and Report on Emerging Technologies, and is currently writing a book on neuromorphic engineering.

References
[1] H. Kim, A. Handa, R. Benosman, S. H. Ieng, and A. J. Davison, “Simultaneous mosaicing and tracking with an event camera,” in Proceedings of the British Machine Vision Conference (BMVC), 2014, pp. 1–12, doi: 10.5244/C.28.26.
