ieeexplore.ieee.org/abstract/document/8248668
Preview of the meta tags from the ieeexplore.ieee.org website.

Linked Hostnames: 2

Search Engine Appearance
Automated Driving in Uncertain Environments: Planning With Interaction and Uncertain Maneuver Prediction
Automated driving requires decision making in dynamic and uncertain environments. The uncertainty in the prediction originates from noisy sensor data and from the fact that the intentions of human drivers cannot be directly measured. This problem is formulated as a partially observable Markov decision process (POMDP) with the intended routes of the other vehicles as hidden variables. The solution of the POMDP is a policy determining the optimal acceleration of the ego vehicle along a preplanned path. The policy is therefore optimized for the most likely future scenarios resulting from an interactive, probabilistic motion model for the other vehicles. Considering possible future measurements of the surrounding cars allows the autonomous car to incorporate the estimated change in future prediction accuracy into the optimal policy. A compact representation results in a low-dimensional state space, so the problem can be solved online for varying road layouts and numbers of vehicles. This is done with a point-based solver in an anytime fashion on a continuous state space. Our evaluation is threefold: First, we evaluate the convergence of the algorithm and show how it can be improved with an additional search heuristic. Second, we present various planning scenarios to demonstrate how introducing the different considered uncertainties results in more conservative planning. Finally, we show online simulations for crossing complex (unsignalized) intersections and demonstrate that our approach performs nearly as well as with full prior information about the intentions of the other vehicles and clearly outperforms reactive approaches.
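The abstract's core idea can be sketched in a few lines: treat the other vehicle's intended route as a hidden variable, update a belief over it with Bayes' rule as noisy measurements arrive, and pick the ego acceleration that minimizes the expected cost under that belief. The routes, probabilities, and cost table below are purely illustrative assumptions for a one-step decision, not the paper's actual model or solver.

```python
# Hypothetical one-step sketch: belief over a hidden route intention,
# Bayesian belief update, and expected-cost action selection.
ROUTES = ["straight", "turn_right"]  # hidden intention hypotheses (assumed)

def update_belief(belief, observation_likelihoods):
    """One Bayes filter step: posterior is proportional to likelihood * prior."""
    posterior = {r: belief[r] * observation_likelihoods[r] for r in ROUTES}
    z = sum(posterior.values())  # normalization constant
    return {r: p / z for r, p in posterior.items()}

def expected_cost(accel, belief, cost_per_route):
    """Expected cost of an ego acceleration, marginalized over the route belief."""
    return sum(belief[r] * cost_per_route[r][accel] for r in ROUTES)

def best_accel(belief, accels, cost_per_route):
    """Pick the acceleration minimizing expected cost under the current belief."""
    return min(accels, key=lambda a: expected_cost(a, belief, cost_per_route))

# Usage with made-up numbers: a measurement fits "turn_right" better,
# so the belief shifts and holding speed becomes the cheapest action.
belief = update_belief(
    {"straight": 0.5, "turn_right": 0.5},       # uniform prior
    {"straight": 0.2, "turn_right": 0.8},       # observation likelihoods
)
COSTS = {  # cost_per_route[route][accel]; conflict only if the car goes straight
    "straight":  {-2.0: 0.0, 0.0: 5.0, 1.0: 10.0},
    "turn_right": {-2.0: 3.0, 0.0: 1.0, 1.0: 0.0},
}
action = best_accel(belief, [-2.0, 0.0, 1.0], COSTS)  # -> 0.0 (hold speed)
```

A full POMDP policy would additionally anticipate how future observations sharpen this belief, which is what lets the planner trade caution now against information gained later.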
Bing and DuckDuckGo
Both engines show the same title and description as the search engine appearance above.
General Meta Tags: 12
- title: Automated Driving in Uncertain Environments: Planning With Interaction and Uncertain Maneuver Prediction | IEEE Journals & Magazine | IEEE Xplore
- google-site-verification: qibYCgIKpiVF_VVjPYutgStwKn-0-KBB6Gw4Fc57FZg
- Description: Automated driving requires decision making in dynamic and uncertain environments. The uncertainty from the prediction originates from the noisy sensor data and
- Content-Type: text/html; charset=utf-8
- viewport: width=device-width, initial-scale=1.0
Open Graph Meta Tags: 3
- og:image: https://ieeexplore.ieee.org/assets/img/ieee_logo_smedia_200X200.png
- og:title: Automated Driving in Uncertain Environments: Planning With Interaction and Uncertain Maneuver Prediction
- og:description: identical to the abstract shown above
Twitter Meta Tags: 1
- twitter:card: summary
Link Tags: 9
- canonical: https://ieeexplore.ieee.org/abstract/document/8248668
- icon: /assets/img/favicon.ico
- stylesheet: https://ieeexplore.ieee.org/assets/css/osano-cookie-consent-xplore.css
- stylesheet: /assets/css/simplePassMeter.min.css?cv=20250812_00000
- stylesheet: /assets/dist/ng-new/styles.css?cv=20250812_00000
Links: 17
- http://www.ieee.org/about/help/security_privacy.html
- http://www.ieee.org/web/aboutus/whatis/policies/p9-26.html
- https://ieeexplore.ieee.org/Xplorehelp
- https://ieeexplore.ieee.org/Xplorehelp/overview-of-ieee-xplore/about-ieee-xplore
- https://ieeexplore.ieee.org/Xplorehelp/overview-of-ieee-xplore/accessibility-statement