\documentclass{article}
\usepackage[margin=0.85in]{geometry}
\usepackage{enumerate}
\usepackage{tikz}
\usepackage{hyperref}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{enumitem}
\usepackage{listings}
\usepackage{amssymb}
\usepackage[style=numeric]{biblatex}
\usepackage{url}
\usepackage{xcolor}
\usepackage{todonotes}

\definecolor{dkgreen}{rgb}{0,0.6,0}
\definecolor{gray}{rgb}{0.5,0.5,0.5}
\definecolor{mauve}{rgb}{0.58,0,0.82}

% Please address the following questions:
% What are some impacts of the proposed research?
% What is novel about the approach you are taking?
% What methods from class does it use?
% What is your metric for success?
% What are the key technical issues you will have to confront?
% What software or datasets will you use?
% What is your timeline?

\lstset{frame=tb,
  language=Matlab,
  aboveskip=3mm,
  belowskip=3mm,
  showstringspaces=false,
  columns=flexible,
  basicstyle={\small\ttfamily},
  numbers=none,
  numberstyle=\tiny\color{gray},
  keywordstyle=\color{blue},
  commentstyle=\color{dkgreen},
  stringstyle=\color{mauve},
  breaklines=true,
  breakatwhitespace=true,
  tabsize=3
}

\addbibresource{references.bib}
\graphicspath{ {./} }

\begin{document}

\author{William Qi, Sourish Ghosh, Yaadhav Raaj, John Mai \\ wq, sourishg, ryaadhav, johnmai}
\title{Proposal: Evaluating Multi-Agent SLAM for Autonomous Vehicles}
\maketitle

\section{Introduction}
On the road to widespread deployment of Level 4 autonomous vehicles on city streets, offline mapping of previously unseen environments has been a key first step in expanding operational capabilities into new areas. This can be a lengthy and expensive process, requiring a sensor-laden vehicle to follow a fixed route while collecting data for SLAM algorithms along the way. To build accurate, high-resolution maps of dynamic urban environments, it is also often necessary to revisit a point multiple times, collecting map information from multiple views to improve robustness. Furthermore, the optimal time window for mapping can be narrow, perhaps limited to periods when few third-party actors are present on the roads.\\

\noindent
To improve the efficiency of this process, one promising approach could be to concurrently deploy multiple sensor-laden vehicles within a single area, each following a different pre-planned path, possibly with some overlapping views at intermediate time steps. By employing multiple mapping vehicles and performing multi-agent visual SLAM, it is possible to merge data from multiple viewpoints across the temporal domain, allowing for the construction of robust maps without the need for time-intensive sequential mapping runs.\\

\noindent
In our project, we would like to thoroughly evaluate and compare the effectiveness of CoSLAM and MultiCol-SLAM in a simulated multi-agent autonomous driving setting, using visual data obtained from a roof-mounted sensor suite. We also plan to evaluate each approach's sensitivity to various environmental configurations by varying the number of mapping agents, the amount of sensor and localization noise, and the density of third-party dynamic actors present in the scene. Finally, if time permits, we would like to exploit lessons learned from our evaluation to improve upon the baseline SLAM approaches in the hopes of achieving better performance.

\section{Related Work}

In this project, we plan to focus primarily on SLAM methods that are capable of running in real-time and do not require offline data collection. We also choose to focus on methods that do not require dense depth information, as LIDAR sensors can be prohibitively expensive and IR-based alternatives can be unreliable outdoors due to over-saturation from sunlight.

\subsection{Visual SLAM}

Visual SLAM methods can be split into two broad categories: sparse keypoint-based approaches, which attempt to match feature descriptors across frames, and dense approaches, which operate directly on predicted depth maps, edges, or surface normals to perform SLAM. Sparse methods include ORB-SLAM \cite{orbslam} and ORB-SLAM2 \cite{orbslam2} (which extends the original with stereo and RGB-D support). Dense methods include LSD-SLAM \cite{lsdslam} and DSO \cite{dsoslam}, which make use of dense features such as pixel-wise intensities and gradients. Extensions of these methods have also been employed with stereo cameras in the self-driving domain \cite{dsocar}.

% \textbf{TODO:} Summarize 2 or 3 of the best visual SLAM approaches that are being employed in AV settings today, make sure one of them is the approach that we evaluate. Talk about the differences and trade-offs between them.

\subsection{Multi-Agent SLAM}
%\textbf{TODO:} Summarize 2 or 3 of the best multi-agent SLAM approaches that are being used today, make sure one of them is the approach that we evaluate.

Sparse keypoint-based multi-agent SLAM methods include: 1) MultiCol-SLAM \cite{multicolslam}, which extends ORB-SLAM2 to a rigidly-coupled multi-camera setting and is evaluated against a highly competitive set of baselines, and 2) CoSLAM \cite{coslam}, which relaxes this rigidity constraint to allow for independently-moving cameras within the same environment.

\section{Proposed Approach}
%\textbf{TODO:} Talk about the environment in which we plan to evaluate our multi-agent SLAM approach.

We aim to evaluate a setting where multiple vehicles drive along a road, helping one another to reinforce their position estimates and update a shared global map. We plan to use the CARLA simulation environment \cite{Dosovitskiy17} due to the ease with which it allows us to set up scenarios in which multiple cars drive with overlapping views, its realistic renderer based on Unreal Engine 4, and its precise knowledge of camera parameters and transformations. To our knowledge, no existing dataset features multiple cars collaborating to build a map; we will therefore also use the KITTI dataset for evaluation. We plan to simulate a situation where two or three cars drive along a stretch of road adjacent to each other, and to measure mapping and localization performance in this scenario.\\

\noindent
%\textbf{TODO:} Explain the key technical issues that multi-agent SLAM in an autonomous driving setting has to confront.
Key technical issues for a multi-agent SLAM system in an autonomous vehicle setting include:
\begin{itemize}
\item Large-scale environments: In an autonomous driving setting, the environment spanned by a single agent is much larger than that encountered in an indoor setting.
\item Dynamic environments: While driving in an urban environment, the agent will likely encounter dynamic features: pedestrians, other vehicles, gates, etc. Mapping in this type of environment will require filtering and tracking of these dynamic objects.
\item Semantic representations: For the agent to successfully navigate an urban environment, some degree of scene understanding is needed; the agent will need to map its environment using semantic concepts rather than unlabelled features or point clouds.
\end{itemize}

%\textbf{TODO:} Explain the experiments we plan to run to evaluate how well multi-agent SLAM works in various configurations of the setting.

\noindent
We plan to evaluate the methods in two world configurations: (1) static and (2) dynamic. In the static world, the mapping vehicles will be the only moving entities; the dynamic world will additionally contain other moving objects such as cars and pedestrians.

\noindent
%\textbf{TODO:} Explain the metrics that we'll use to quantify SLAM performance.
We will evaluate the algorithms based on the following metrics:
\begin{itemize}
\item Qualitative analysis (quality of the map and trajectory) on long, challenging trajectories with multiple loop closures.
\item Relative pose error (RPE), which measures the local accuracy of a trajectory over a fixed time interval.
\item Absolute trajectory error (ATE), which compares the absolute distances between the estimated and ground-truth trajectories.
\end{itemize}
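
\noindent
As a concrete reference, these two quantitative metrics can be sketched using the standard formulations from the trajectory-evaluation literature (notation is ours: $\operatorname{trans}(\cdot)$ denotes the translational component of a pose). For estimated poses $P_1, \dots, P_n \in \mathrm{SE}(3)$ and ground-truth poses $Q_1, \dots, Q_n \in \mathrm{SE}(3)$:
\begin{align*}
E_i &= \left(Q_i^{-1} Q_{i+\Delta}\right)^{-1} \left(P_i^{-1} P_{i+\Delta}\right), &
\mathrm{RPE} &= \left( \frac{1}{n-\Delta} \sum_{i=1}^{n-\Delta} \left\lVert \operatorname{trans}(E_i) \right\rVert^2 \right)^{1/2}, \\
F_i &= Q_i^{-1} S P_i, &
\mathrm{ATE} &= \left( \frac{1}{n} \sum_{i=1}^{n} \left\lVert \operatorname{trans}(F_i) \right\rVert^2 \right)^{1/2},
\end{align*}
where $\Delta$ is the fixed frame interval for RPE and $S$ is the rigid-body transformation that best aligns the estimated trajectory to the ground truth in the least-squares sense.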

%\textbf{TODO:} Explain how we plan to make improvements to the approach to achieve better performance, if time permits.

\noindent
If time permits, we plan to extend the best-performing method to incorporate stereo inputs in place of monocular vision, with the hope of achieving better performance.

\section{Timeline}
In the pre-midpoint portion of the project, we plan to first set up a simulation environment in CARLA and collect sensor data in urban, multi-agent settings. Using this environment, we then plan to construct separate datasets for parameter tuning and evaluation. Post-midpoint, we plan to implement CoSLAM and MultiCol-SLAM, evaluating performance in a variety of environmental configurations and settings, with a summary of performance (measured with RPE, ATE, and qualitatively) provided in our final report.
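% The RPE/ATE computation in the evaluation phase can be sketched as a short
% script. The following is a minimal translation-only sketch under our own
% assumptions: trajectories are lists of 2D positions, the estimated trajectory
% is already aligned to the ground truth for ATE, and all function names are
% hypothetical.

```python
import math

def ate_rmse(est, gt):
    # Root-mean-square absolute trajectory error over paired 2D positions,
    # assuming the estimated trajectory is already aligned to ground truth.
    sq = [(ex - gx) ** 2 + (ey - gy) ** 2
          for (ex, ey), (gx, gy) in zip(est, gt)]
    return math.sqrt(sum(sq) / len(sq))

def rpe_rmse(est, gt, delta=1):
    # Root-mean-square relative pose error (translation only) over a fixed
    # frame interval `delta`: compare per-interval displacements.
    errs = []
    for i in range(len(est) - delta):
        de = (est[i + delta][0] - est[i][0], est[i + delta][1] - est[i][1])
        dg = (gt[i + delta][0] - gt[i][0], gt[i + delta][1] - gt[i][1])
        errs.append((de[0] - dg[0]) ** 2 + (de[1] - dg[1]) ** 2)
    return math.sqrt(sum(errs) / len(errs))
```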

\printbibliography
\end{document}