3D Depth Perception
in Science

Nerian's 3D Depth Perception in Science

Our stereo vision technology has already been used in numerous research projects. Below you will find a list of scientific publications by us and by our customers in which our technology has been applied.

Naturally, we are always interested in contributing to or supporting research projects wherever possible.

GreenTeam Universität Stuttgart

We currently support the GreenTeam of the University of Stuttgart in the field of autonomous driving.

Elefant Racing e.V. at the University of Bayreuth

We are currently a Titan sponsor of the Elefant Racing team at the University of Bayreuth, Germany.

MIT & Delft Driverless

Nerian supported the joint team of TU Delft and MIT at the 2019 Formula Student competition. The team achieved an excellent 3rd place out of a total of 20 participants.

Publications by Nerian

Real-Time Stereo Vision on FPGAs with SceneScan

We present a flexible FPGA stereo vision implementation that is capable of processing up to 100 frames per second or image resolutions up to 3.4 megapixels, while consuming only 8 W of power. The implementation uses a variation of the Semi-Global Matching (SGM) algorithm, which provides superior results compared to many simpler approaches. The stereo matching results are improved significantly through a post-processing chain that operates on the computed cost cube and the disparity map. With this implementation we have created two stand-alone hardware systems for stereo vision, called SceneScan and SceneScan Pro. Both systems have been developed to market maturity and are available from Nerian Vision GmbH.

Published at Forum Bildverarbeitung 2018, pp. 339–350.
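The Semi-Global Matching (SGM) aggregation described in the abstract can be illustrated with a deliberately simplified sketch: an absolute-difference cost cube, cost aggregation along a single path (left to right) with the classic smoothness penalties, and winner-takes-all disparity selection. Function name and the penalty values `p1`/`p2` are hypothetical choices for this illustration; the actual FPGA implementation aggregates along multiple paths and adds the post-processing chain mentioned above.

```python
import numpy as np

def sgm_1d(left, right, max_disp=16, p1=5.0, p2=50.0):
    """Toy SGM sketch: absolute-difference matching cost, aggregated
    along one scanline direction with P1/P2 penalties, then
    winner-takes-all disparity selection."""
    h, w = left.shape
    # Matching cost cube: cost[y, x, d] = |L(y, x) - R(y, x - d)|
    cost = np.full((h, w, max_disp), 255.0)
    for d in range(max_disp):
        cost[:, d:, d] = np.abs(left[:, d:] - right[:, :w - d])

    # Aggregate along each row (one of the 4-8 paths real SGM uses)
    aggr = cost.copy()
    for x in range(1, w):
        prev = aggr[:, x - 1, :]                    # (h, max_disp)
        prev_min = prev.min(axis=1, keepdims=True)
        shift_m = np.roll(prev, 1, axis=1) + p1     # neighbour d - 1
        shift_p = np.roll(prev, -1, axis=1) + p1    # neighbour d + 1
        shift_m[:, 0] = np.inf                      # no d = -1
        shift_p[:, -1] = np.inf                     # no d = max_disp
        jump = prev_min + p2                        # large disparity jump
        best = np.minimum(np.minimum(prev, shift_m),
                          np.minimum(shift_p, jump))
        aggr[:, x, :] = cost[:, x, :] + best - prev_min

    # Winner-takes-all over the aggregated cost cube
    return aggr.argmin(axis=2)
```

Real SGM sums such aggregated costs over several path directions before the argmin, which is what makes the algorithm both parallelizable and well suited to an FPGA pipeline.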

SP1: Stereo Vision in Real Time

Stereo vision is a compelling technology for depth perception. Unlike other methods for depth sensing, such as time-of-flight or structured light cameras, stereo vision is a passive approach. This makes this method suitable for environments with bright ambient lighting, or for situations with multiple sensors within close proximity to one another. The reason why stereo vision is not used more widely is that it requires a vast amount of computation. To overcome this burden, Nerian Vision Technologies introduces the SP1 stereo vision system. This stand-alone device is able to handle the required processing by relying on a built-in FPGA.

Published at MuSRobS@IROS, 2015, pp. 40–41.
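The depth perception the abstract refers to rests on the standard stereo triangulation relation Z = f·B/d: depth is the focal length times the baseline, divided by the measured disparity. A minimal illustration follows; the function name and all numbers are hypothetical examples, not SP1 parameters.

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo triangulation: Z = f * B / d, with focal length f
    in pixels, baseline B in metres and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Example: a 0.25 m baseline, a 1000 px focal length and a measured
# disparity of 50 px place the object at 5 m distance.
print(depth_from_disparity(50, 1000, 0.25))  # → 5.0
```

Because depth is inversely proportional to disparity, depth resolution degrades quadratically with distance, which is why wide baselines and high image resolutions matter for long-range stereo.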

Publications by customers relating to Scarlet

  • Zürn, M., Wnuk, M., Lechler, A., & Verl, A. (2023). Topology Matching of Branched Deformable Linear Objects. In 2023 IEEE International Conference on Robotics and Automation (ICRA) (pp. 7097-7103). [Link]
  • Wnuk, M., Zürn, M., Paukner, M., Ulbrich, S., Lechler, A., & Verl, A. (2022). Case Study on Localization for Robotic Wire Harness Installation. In Stuttgart Conference on Automotive Production (pp. 333-343). Cham: Springer International Publishing. [Link]
  • Zürn, M., Wnuk, M., Lechler, A., & Verl, A. (2022). Software Architecture For Deformable Linear Object Manipulation: A Shape Manipulation Case Study. In 2022 IEEE/ACM 4th International Workshop on Robotics Software Engineering (RoSE) (pp. 9-16). [Link]

Publications by customers relating to SceneScan

  • Sadeghi, R., Kartha, A., Barry, M. P., Gibson, P., Caspi, A., Roy, A., … & Dagnelie, G. (2024). Benefits of thermal and distance-filtered imaging for wayfinding with prosthetic vision. Scientific Reports, 14(1), 1313. [Link]
  • Lee, C., Schätzle, S., Lang, S. A., & Oksanen, T. (2023). Design considerations of a perception system in functional safety operated and highly automated mobile machines. Smart Agricultural Technology, 6, 100346. [Link]
  • Sugiura, R., Nakano, R., Shibuya, K., Nishisue, K., & Fukuda, S. (2023). Real-Time 3D Tracking of Flying Moths Using Stereo Vision for Laser Pest Control. In 2023 ASABE Annual International Meeting. [Link]
  • Yildiz, E., Renaudo, E., Hollenstein, J., Piater, J., & Wörgötter, F. (2022). An Extended Visual Intelligence Scheme for Disassembly in Automated Recycling Routines. In International Conference on Robotics, Computer Vision and Intelligent Systems (pp. 25-50). [Link]
  • Forkan, A. R. M., Kang, Y. B., Marti, F., Joachim, S., Banerjee, A., Milovac, J. K., … & Georgakopoulos, D. (2022). Mobile IoT-RoadBot: an AI-powered mobile IoT solution for real-time roadside asset management. In Proceedings of the 28th Annual International Conference on Mobile Computing And Networking (pp. 883-885). [Link]
  • Bobkov, V. A., Kudryashov, A. P., & Inzartsev, A. V. (2022). Object Recognition and Coordinate Referencing of an Autonomous Underwater Vehicle to Objects via Video Stream. Programming and Computer Software, 48(5), 301-311. [Link]
  • Bobkov, V. A., Morozov, M. A., Shupikova, A. A., & Inzartcev, A. V. (2021). Underwater Pipeline Recognition using Autonomous Underwater Vehicle by Stereo Images in the Task of Inspection of Underwater Objects. Underwater Investigation and Robotics, 3(37), 36-45. [Link]
  • Wnuk, M., Hinze, C., Zürn, M., Pan, Q., Lechler, A., & Verl, A. (2021, November). Tracking Branched Deformable Linear Objects With Structure Preserved Registration by Branch-wise Probability Modification. In 2021 27th International Conference on Mechatronics and Machine Vision in Practice (M2VIP) (pp. 101-108). IEEE. [Link]
  • Chang, W. C., Lin, Y. K., & Pham, V. T. (2021). Vision-Based Flexible and Precise Automated Assembly with 3D Point Clouds. In 9th International Conference on Control, Mechatronics and Automation (ICCMA) (pp. 218-223). IEEE. [Link]
  • Bobkov, V., Kudryashov, A., & Inzartsev, A. (2021). Method for the Coordination of Referencing of Autonomous Underwater Vehicles to Man-Made Objects Using Stereo Images. Journal of Marine Science and Engineering, 9(9), 1038. [Link]
  • Nardy, L., Pinheiro, O., & Lepikson, H. (2021). Computer System Integrated with Digital Models for Reconstruction of Underwater Structures with High Definition. IEEE Latin America Transactions. [Link]
  • Omrani, E., Mousazadeh, H., Omid, M., Masouleh, M. T., Jafarbiglu, H., Salmani-Zakaria, Y., … & Kiapei, A. (2020). Dynamic and static object detection and tracking in an autonomous surface vehicle. Ships and Offshore Structures, 15(7), 711-721. [Link]
  • Jiang, P., Osteen, P., Wigness, M. & Saripalli, S. (2020). RELLIS-3D Dataset: Data, Benchmarks and Analysis. arXiv:2011.12954 [cs.CV]. [Link]
  • Yildiz, E., Brinker, T., Renaudo, E., Hollenstein, J. J., Haller-Seeber, S., Piater, J., & Wörgötter, F. A. (2020). Visual Intelligence Scheme for Hard Drive Disassembly in Automated Recycling Routines. In Proceedings of the International Conference on Robotics, Computer Vision and Intelligent Systems (ROBOVIS 2020) (pp. 17-27). [Link]
  • Hinze, C., Zürn, M., Wnuk, M., Lechler, A., & Verl, A. (2020). Nonlinear Trajectory Control for Deformable Linear Objects based on Physics Simulation. In 46th Annual Conference of the IEEE Industrial Electronics Society (IECON) (pp. 310-316). [Link]
  • Strobel, K., Zhu, S., Chang, R. & Koppula, S. (2019). Accurate, Low-Latency Visual Perception for Autonomous Racing: Challenges, Mechanisms, and Practical Solutions. Technical report. [Link]
  • Hinze, C., Wnuk, M. & Lechler, A. (2019). Harte Echtzeit für weiche Materialien. atp magazin, v. 61, n. 11-12, p. 112-119. [Link]
  • Vrba, M., Heřt, D. & Saska, M. (2019). Onboard Marker-Less Detection and Localization of Non-Cooperating Drones for Their Safe Interception by an Autonomous Aerial System. IEEE Robotics and Automation Letters, 4(4), 3402-3409. IEEE. [Link]

Publications by customers relating to the Stereo Vision IP Core

  • Junger, C., Fütterer, R., Rosenberger, M., & Notni, G. (2022). FPGA-based multi-view stereo system with flexible measurement setup. Measurement: Sensors, 24, 100425. [Link]
  • Gibson, P. L., Hedin, D. S., Seifert, G. J., Rydberg, N., Skujiņš, J. & Boldenow, P. (2022). Stereoscopic Distance Filtering Plus Thermal Imaging Glasses Design. International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC). [Link]
  • Kartha, A., Sadeghi, R., Barry, M. P., Bradley, C., Gibson, P., Caspi, A., Roy, A. & Dagnelie, G. (2020). Prosthetic Visual Performance Using a Disparity-Based Distance-Filtering System. Translational Vision Science & Technology, 9(12), 27-27. [Link]
  • Fütterer, R., Schellhorn, M., & Notni, G. (2019). Implementation of a multiview passive-stereo-imaging system with SoC technology. In Photonics and Education in Measurement Science 2019 (Vol. 11144, p. 111440Q). International Society for Optics and Photonics. [Link]

Publications by customers relating to the SP1

  • Erz, M. (2018). Computer vision based pose detection of agricultural implements without a priori knowledge of their geometry and visual appearance. In 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (pp. 1-6). IEEE. [Link]
  • Buck, S., & Zell, A. (2019). CS::APEX: A Framework for Algorithm Prototyping and Experimentation with Robotic Systems. Journal of Intelligent & Robotic Systems, 94(2), 371-387. [Link]
  • Hanten, R., Kuhlmann, P., Otte S. & Zell, A. (2018). Robust Real-Time 3D Person Detection for Indoor and Outdoor Applications. In 2018 IEEE International Conference on Robotics and Automation (ICRA) (pp. 2000-2006). IEEE. [Link]
  • Dubey, G., Madaan, R., & Scherer, S. (2018). DROAN-Disparity-Space Representation for Obstacle Avoidance: Enabling Wire Mapping & Avoidance. In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) (pp. 6311-6318). IEEE. [Link] [PDF]
  • Zilly, J., Buhmann, J. M., & Mahapatra, D. (2017). Glaucoma detection using entropy sampling and ensemble learning for automatic optic cup and disc segmentation. Computerized Medical Imaging and Graphics, 55, 28-41. [Link] [PDF]

New stereo vision project?
We will evaluate it with you!

Never miss out

If you would like to receive news about our stereo vision products, subscribe to our newsletter.