Improve model card: add metadata (tags, license) and links
#2
by nielsr (HF Staff) - opened
README.md CHANGED
@@ -1,3 +1,22 @@
+---
+license: mit
+pipeline_tag: robotics
+library_name: transformers
+---
+
+# VLN-PE Benchmark Models
+
+This repository hosts models and results for the [Rethinking the Embodied Gap in Vision-and-Language Navigation: A Holistic Study of Physical and Visual Disparities](https://huggingface.co/papers/2507.13019) benchmark.
+
+VLN-PE is a physically realistic Vision-and-Language Navigation (VLN) platform supporting humanoid, quadruped, and wheeled robots. It aims to bridge the gap between idealized assumptions and physical deployment challenges in VLN, systematically evaluating ego-centric VLN methods across different technical pipelines.
+
+* **Project Page**: https://crystalsixone.github.io/vln_pe.github.io/
+* **Code Repository**: https://github.com/InternRobotics/InternNav
+
+## Benchmark Results
+
+The following table presents the benchmark results for various models evaluated on the VLN-PE platform:
+
 **VLN-PE Benchmark**
 <style type="text/css">
 .tg {border-collapse:collapse;border-spacing:0;}
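
Since the new metadata declares `library_name: transformers` and the card states that the repository hosts model checkpoints and results, here is a minimal sketch of fetching the repository contents with `huggingface_hub`; the repo id below is a placeholder, as this PR does not name it.

```python
# Minimal sketch, assuming the VLN-PE checkpoints and result files live in this
# model repository; replace the placeholder repo id with the actual one.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="<org>/<vln-pe-models>")  # placeholder repo id
print(f"VLN-PE benchmark assets downloaded to: {local_dir}")
```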
|