3D Pose Estimation GitHub

Finally, the 3D pose of each person is reconstructed from the corresponding bounding boxes and associated 2D poses (d). interaction_network_pytorch: a PyTorch implementation of Interaction Networks for Learning about Objects, Relations and Physics. What does a LIDAR measurement look like? The goal is to estimate the object pose expressed in the camera frame when a calibrated camera is used. Challenges on Multi-Human Pose Estimation and Fine-Grained Multi-Human Parsing, held with CVPR 2018. A 3D point follows a space-time path. NoisyNaturalGradient: a PyTorch implementation of the paper "Noisy Natural Gradient as Variational Inference". Hey! I have a setup with a bottom camera (a simple webcam pointed at the ground). I first reproduced prior work from ICCV 2017 using fully-connected neural nets to learn 2D-to-3D pose regression. andyzeng/3dmatch-toolbox.

The most general version of the problem requires estimating the six degrees of freedom of the pose and five calibration parameters. Example data is provided to give an indication of what the toolbox can be used for, including how to simulate the effect of different motion profiles on an acquisition. For frame-level pose estimation we experiment with Mask R-CNN, as well as our own proposed 3D extension of this model, which leverages temporal information over small clips to generate more robust frame predictions. Estimating the 3D pose of a known object from a given 2D image is an important problem with numerous studies for robotics and augmented reality applications. For denser points, Dr. Yasutaka Furukawa has written a beautiful software package called PMVS2 for running dense multi-view stereo. Using the determined rotation matrix and translation vector, I would like to be able to calculate the pose of the second (right) camera, given the pose of the first (left) camera (a sketch of the transform composition appears after this block). The dataset contains 5,277 driving images and over 60K car instances, where each car is fitted with an industry-grade 3D CAD model with absolute model size and semantically labelled keypoints. 2D Human Pose Estimation vs 3D Human Pose Estimation.

• Only the 3D pose ground-truth will be provided.
• Use of pretrained features on ImageNet (i.e., on other tasks) is allowed.

[Tateno2017] CNN-SLAM (2/3), camera pose estimation: the pose is estimated by minimizing the photometric difference when pixels of the current frame are projected onto the previous keyframe (direct method). As in LSD-SLAM, only regions with high intensity gradient are used; the projection uses depth predicted by a CNN, whereas LSD-SLAM estimates depth from stereo between keyframes (CNN depth prediction). VINS-Mobile uses a sliding-window optimization-based formulation to provide high-accuracy visual-inertial odometry with automatic initialization and failure recovery. I am a PhD student in the School of Interactive Computing at the Georgia Institute of Technology, advised by James Rehg. The motivation for this article came from Facebook Research's DensePose: last week, Facebook released the code, models, and dataset for this framework, along with DensePose-COCO, a large-scale real-world dataset for human pose estimation that includes manually annotated image-to-surface correspondences for 50,000 COCO images. Human Pose Estimation 101. This keeps the covariances for those values from exploding while ensuring that your robot's state estimate remains affixed to the X-Y plane. Real-Time Monocular Pose Estimation of 3D Objects using Temporally Consistent Local Color Histograms. Henning Tjaden, Ulrich Schwanecke, RheinMain University of Applied Sciences, Wiesbaden, Germany.
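The stereo question above (recovering the right camera's pose from the left camera's pose plus the rotation and translation returned by stereo calibration) is just a composition of rigid transforms. A minimal NumPy sketch, assuming OpenCV's stereoCalibrate convention where R and T map points from the left camera frame into the right camera frame; the function names and the world-to-camera layout are illustrative choices, not part of any particular codebase:

```python
import numpy as np

def rt_to_homogeneous(R, t):
    """Pack a 3x3 rotation and a translation vector into a 4x4 rigid transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

def right_camera_pose(R_stereo, t_stereo, T_world_to_left):
    """Chain world -> left camera -> right camera to get the right camera's pose.

    R_stereo, t_stereo: stereo extrinsics (X_right = R_stereo @ X_left + t_stereo).
    T_world_to_left:    known 4x4 world-to-left-camera transform.
    """
    T_left_to_right = rt_to_homogeneous(R_stereo, t_stereo)
    return T_left_to_right @ T_world_to_left

# Toy usage: identity left-camera pose and a 10 cm horizontal stereo baseline.
T_right = right_camera_pose(np.eye(3), [0.10, 0.0, 0.0], np.eye(4))
print(T_right)
```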
Beyond pose estimation from single images in a discretized viewpoint space, we show that the 3D aspect part representation can be utilized to estimate continuous object pose and 3D aspect part locations in multiview object tracking. A function f combines the normal and the lighting to produce shading. Now, we assume that the term "3D pose estimation" refers to the localization of the hand or human body keypoints in 3D space. This is performed using a robust estimation strategy called RANSAC. LP Molecular Viewer is an ActiveX/ATL control for embedding interactive 3D representations of molecular data in Microsoft products. Since there are multiple candidates with similar angle and distance, it is hard to distinguish them based on spatial estimation, which motivates us to leverage visual features. What do you need for pose estimation? To calculate the 3D pose of an object in an image you need the following information. In PASCAL3D+, we augment the 12 rigid categories in the PASCAL VOC 2012 dataset [4] with 3D annotations. Non-central absolute pose: the non-central absolute pose problem consists of finding the pose of a viewpoint given a number of 2D-3D correspondences between bearing vectors in multiple camera frames and points in the world frame.

PersonLab: Person Pose Estimation and Instance Segmentation. George Papandreou, Tyler Zhu, Liang-Chieh Chen, Spyros Gidaris, Jonathan Tompson, Kevin Murphy. ECCV 2018. A box-free bottom-up approach for the tasks of pose estimation and instance segmentation of people in multi-person images using an efficient single-shot model. Hybrid One-Shot 3D Hand Pose Estimation by Exploiting Uncertainties. Georg Poier, Konstantinos Roditakis, Samuel Schulter, Damien Michel, Horst Bischof and Antonis A. Argyros. We demonstrate the versatility of this framework by tracking various body parts in multiple settings. This performance will be compared against the EgoCap ResNet model and their local skeleton pose estimation, achieved using an analysis-by-synthesis optimization.

I have used OpenCV to calibrate a stereo camera pair. I'm currently a research intern at Facebook AI Research with Georgia Gkioxari and Justin Johnson, and I was a Student Researcher at Google NYC in 2018 with Ameesh Makadia. The use of templating here allows Ceres to call CostFunctor::operator(), with T=double when just the value of the residual is needed, and with a special type T=Jet when the Jacobians are needed. Using DeepLabCut for 3D markerless pose estimation across species and behaviors. This is the code for this video on YouTube by Siraj Raval. We propose a new dataset for 3D hand+object pose estimation from color images, together with a method for efficiently annotating this dataset, and a 3D pose prediction method based on this dataset. We contribute a large-scale database for 3D object recognition, named ObjectNet3D, that consists of 100 categories, 90,127 images, 201,888 objects in these images and 44,147 3D shapes. We propose a simple deep learning baseline for 3D human pose estimation that outperforms the state of the art (a minimal sketch of such a network follows below). Since CNN-based learning algorithms perform better on classification problems, richly labeled data is all the more useful in the training stage.
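To make the "simple deep learning baseline" idea above concrete, here is a minimal PyTorch sketch of a fully-connected network that lifts 2D joint detections to 3D joint positions. The joint count, layer sizes, and the absence of residual connections are simplifications for illustration, not the published configuration of any particular paper:

```python
import torch
import torch.nn as nn

class Lift2Dto3D(nn.Module):
    """Fully-connected baseline that regresses 3D joints from 2D joint detections."""

    def __init__(self, num_joints=17, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_joints * 2, hidden), nn.BatchNorm1d(hidden),
            nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden),
            nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(hidden, num_joints * 3),
        )

    def forward(self, joints_2d):                 # joints_2d: (batch, num_joints, 2)
        batch = joints_2d.shape[0]
        out = self.net(joints_2d.reshape(batch, -1))
        return out.reshape(batch, -1, 3)          # (batch, num_joints, 3)

model = Lift2Dto3D()
pred = model(torch.randn(8, 17, 2))               # toy batch of 2D detections
print(pred.shape)                                  # torch.Size([8, 17, 3])
```

In practice such a network is trained with a simple per-joint regression loss on pairs of 2D detections and ground-truth 3D joints.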
The dataset includes around 25K images containing over 40K people with annotated body joints. For a full list of compatible devices, see here. The performance of relative spatial estimation is worse than that of visual features (even worse than methods without learning). In addition, we contribute a large-scale video dataset for 6D object pose estimation named the YCB-Video dataset. For computational efficiency, the set of object hypotheses is clustered to obtain smaller candidate sets while still containing poses close to the true solutions. The script or the add-in is now installed in Autodesk Fusion 360. Learning Basis Representation to Refine 3D Human Pose Estimations. This involves an end-to-end deep learning method to estimate pose in a volumetric representation. Even on a 1080 Ti we couldn't reach 30 fps.

Select a subset of points from the above point cloud such that all the matches are mutually compatible. Domain Transfer for 3D Pose Estimation. We present two novel solutions for multi-view 3D human pose estimation based on new learnable triangulation methods that combine 3D information from multiple 2D views. [36] take another approach by relying on weak 3D supervision in the form of a relative 3D ordering of joints. Ideally the approach requires roughly 100 GB of RAM to load the 3D pose databases for the retrieval of K-NNs. 3D Hand Shape and Pose Estimation from a Single RGB Image. Since the publication of the original paper, there have been tremendous advances in the field of machine learning and deep neural networks. Call the '….m' script to perform 3D Pose Estimation on the whole dataset at once, or call 'RUN_Iterated.m' to perform 3D Pose Estimation for each single image of the dataset.

We estimate a full 3D mesh and show that 2D joints alone carry a surprising amount of information about body shape. It provides pixel-perfect ground truth for scene understanding problems such as semantic segmentation, instance segmentation, and object detection, and also for geometric computer vision problems such as optical flow, depth estimation, camera pose estimation, and 3D reconstruction. Demo of 3D mouse pose estimation: the demo shows the input depth image and the estimated pose in top and side views. I am developing a real-time algorithm for full-body pose estimation of mice using depth images. The camera pose consists of 6 degrees of freedom (DoF), made up of the rotation (roll, pitch, and yaw) and the 3D translation of the camera with respect to the world (a short sketch of assembling such a pose appears after this block). camera-pose-estimation. See also a follow-up project which includes all the above as well as mid-level facial details and occlusion handling: Extreme 3D face reconstruction, available also as a Docker image for easy install. DensePose: 2D UV maps that can be placed on a 3D mesh. Each set has several models depending on the dataset they have been trained on (COCO or MPII). A similar project with 3D pose estimation using only an RGB camera is: … Full 3D estimation of human pose from a single image remains a challenging task despite many recent advances. The impact of using appearance features, poses, and their combinations is measured, and the different training/testing protocols are evaluated.
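A small sketch of the 6-DoF parameterization described above: assembling a 4x4 pose matrix from roll/pitch/yaw angles plus a translation. The Euler-angle order, the use of SciPy, and the function name are assumptions to adapt to your own convention:

```python
import numpy as np
from scipy.spatial.transform import Rotation

def pose_from_rpy_translation(roll, pitch, yaw, t_xyz):
    """Build a 4x4 rigid transform from roll/pitch/yaw (radians) and a translation."""
    T = np.eye(4)
    T[:3, :3] = Rotation.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = t_xyz
    return T

# Toy camera pose: slight pitch, rotated 90 degrees about Z, 2 m off the ground.
print(pose_from_rpy_translation(0.0, 0.1, np.pi / 2, [1.0, 0.0, 2.0]))
```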
This is the code for the paper. Probabilistic Pose Estimation Using a Bingham Distribution-Based Linear Filter. We describe the first method to automatically estimate the 3D pose of the human body as well as its 3D shape from a single unconstrained image. Readily available, accurately labeled synthetic data has the potential to reduce the effort. Triangulation of multiple rays. Greetings from Yuesong Xie (谢岳松)! I am a Connected and Automated Vehicle Research Engineer working on some cool stuff! Please find the projects that I have worked on in the following sections, and feel free to let me know your thoughts!

Open-source release of our CVPR 2019 paper "3D Hand Shape and Pose Estimation from a Single RGB Image". Introduction. Specifically, for each category, we first download a set of CAD models from Google 3D Warehouse [1], which are selected in such a way… Compute extrinsic parameters given intrinsic parameters, a few 3D points, and their projections (see the solvePnP sketch after this block). 3D shape classification and retrieval, 3D shape descriptor; Learning Analysis-by-Synthesis for 6D Pose Estimation in RGB-D Images, Alexander Krull, Eric Brachmann, Frank Michel, Michael Ying Yang, Stefan Gumhold, Carsten Rother, 6D pose estimation; A Deep Visual Correspondence Embedding Model for Stereo Matching Costs [KITTI submission]. We also provide baseline experiments on four tasks: region proposal generation, 2D object detection, joint 2D detection and 3D object pose estimation, and image-based 3D shape retrieval, which can serve as baselines for future research using our database. University of Illinois at Urbana-Champaign: GPS-LiDAR Sensor Fusion Aided by 3D City Models for UAVs, Akshay Shetty and Grace Xingxin Gao, SCPNT, November 2017. 3D Structure from 2D Motion, Tony Jebara et al. Yu Xiang is a Senior Research Scientist at NVIDIA. Pose Optimization SLAM. Multi-view 6D Object Pose Estimation and Camera Motion Planning using RGBD Images. Yang, "3D Path Planning from a Single 2D Fluoroscopic Image for Robot Assisted Fenestrated Endovascular Aortic Repair", arXiv preprint arXiv:1809.08563.

Estimating 3D human poses from 2D joint positions is an ill-posed problem, and is further complicated by the fact that the estimated 2D joints usually have errors to which most 3D pose estimators are sensitive. ObjectNet3D: A Large Scale Database for 3D Object Recognition. AliceVision is a photogrammetric computer vision framework; among other things, it can estimate the 3D pose of an image with respect to an existing 3D reconstruction generated by Meshroom. Learning to Estimate Pose by Watching Videos (arXiv, TensorFlow). Joint Training of a Convolutional Network and a Graphical Model for Human Pose Estimation, 2014. This paper estimates 3D human poses from a single image. So let's begin with the body pose estimation model trained on MPII.
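The "compute extrinsic parameters given intrinsics, a few 3D points, and their projections" step above is exactly what OpenCV's solvePnP does. A self-contained sketch in which the intrinsics and ground-truth pose are made-up values used only to synthesize consistent 2D-3D correspondences; a real pipeline would take the 2D points from keypoint detection instead:

```python
import numpy as np
import cv2

# Hypothetical calibrated camera and a known pose, used to synthesize correspondences.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                         # assume no lens distortion
rvec_gt = np.array([[0.1], [-0.2], [0.05]])
tvec_gt = np.array([[0.1], [0.05], [2.5]])

object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0],
                          [0, 1, 0], [0.5, 0.5, 0.5], [1, 0, 1]], dtype=np.float64)
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, dist)

# Recover the extrinsics (object pose expressed in the camera frame).
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)                 # rotation vector -> 3x3 rotation matrix
print(ok, rvec.ravel(), tvec.ravel())      # should match rvec_gt / tvec_gt closely
```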
Deep face pose estimation. More details (how to obtain the dataset, instructions, evaluation, contact, etc.) can be found on the HANDS challenge website. Hence, 3D hand pose estimation is an important cornerstone of many Human-Computer Interaction (HCI), Virtual Reality (VR), and Augmented Reality (AR) applications, such as robotic control or virtual object interaction.

• Training on both train+val for a submission is allowed.

Given a reference scene representation (lidar, SfM point cloud, or depth), estimate the 6 DoF camera pose of a query image. 2D keypoint detection is relatively easier than 3D localization and rotation estimation. Relative pose estimation. Increase number of matches / sparse 3D points: to increase the number of matches, you should use the more discriminative DSP-SIFT features instead of plain SIFT and also estimate the affine feature shape using the options --SiftExtraction.estimate_affine_shape=true and --SiftExtraction.domain_size_pooling=true. He received his Ph.D. degree from Shandong University under the supervision of Dr. … To recover the pose from an image, keypoint-based methods adopt a two-stage pipeline: they first predict 2D keypoints of the object and then compute the pose through 2D-3D correspondences with a PnP algorithm. BB8 is a novel method for 3D object detection and pose estimation from color images only. Two point clouds will be obtained. It includes both utilization of hardware and software.

The ArUco library has been developed by the AVA group of the University of Cordoba (Spain); a minimal marker-pose sketch using OpenCV's bundled ArUco module appears after this block. 2D coordinates of a few points: you need the 2D (x, y) locations of a few points in the image. From here, we can essentially take the maximum activation locations for each keypoint layer, and then estimate the 3D car pose using OpenCV's solvePnP method. Both approaches present new and interesting directions for integrating pose into detection; however, in this work we focus on the pose estimation problem itself. Pose estimation from a dance video with OpenPose: OpenPose keeps getting updated, and 3D pose estimation can now be tried as well. I am interested in computer vision and machine learning, especially HCI-related things. In contrast, this paper proposes a novel end-to-end trainable approach for scene flow estimation from unstructured 3D point clouds. Statistical body shape models are widely used in 3D pose estimation due to their low-dimensional parameter representation. However, it is difficult to accurately avoid self-intersection between body parts. Head pose estimation, face alignment.

The latter has an additional depth channel and is commonly used in 3D human pose estimation research. 3D Hand Pose Estimation Using Randomized Decision Forest with Segmentation Index Points, Peiyi Li et al., December 2015. Before joining NTHU, he was a postdoc in CSE@UW working with Steve Seitz and Ali Farhadi. Fetal pose estimation in MRI time series yields novel means of quantifying fetal movements in health and disease, and enables the learning of kinematic models that may enhance prospective mitigation of fetal motion artifacts during MRI acquisition. We show that this data can be used to train a recent state-of-the-art hand pose estimation method, leading to increased accuracy. [1] Hinterstoisser et al.
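As mentioned above, fiducial markers are an easy way to obtain a few 2D image points whose 3D layout is known, which is all PnP needs. A minimal sketch using the ArUco module shipped with OpenCV's contrib package (not the original standalone ArUco library); it assumes the pre-4.7 OpenCV Python API, and the image path, intrinsics, and 5 cm marker size are placeholders for your own setup:

```python
import numpy as np
import cv2

K = np.array([[800.0, 0.0, 320.0],        # placeholder intrinsics from calibration
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
marker_length = 0.05                       # marker side length in meters

frame = cv2.imread("frame.png")            # hypothetical input image
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)

if ids is not None:
    # One rvec/tvec per detected marker: the marker pose in the camera frame.
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, K, dist)
    for marker_id, rvec, tvec in zip(ids.ravel(), rvecs, tvecs):
        print(marker_id, rvec.ravel(), tvec.ravel())
```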
Towards Viewpoint Invariant 3D Human Pose Estimation. Albert Haque, Boya Peng, Zelun Luo, Alexandre Alahi, Serena Yeung, Li Fei-Fei. ECCV 2016. Vision-Based Hand Hygiene Monitoring in Hospital. Serena Yeung, Alexandre Alahi, Zelun Luo, Boya Peng, Albert Haque, Li Fei-Fei. NIPS Workshop 2016.

Idris, a dependently typed functional programming language. The backend then incrementally computes the mean of the resulting Gaussian, which is in turn converted by the treemap driver into a map estimate. tf-pose-estimation GitHub. We present a new technique to fully automate the segmentation of an organ from 3D ultrasound (3D-US) volumes, using the placenta as the target organ. Originally, we demonstrated the capabilities for trail tracking, reaching in mice, and various Drosophila behaviors during egg-laying (see Mathis et al.). Learning to Estimate 3D Human Pose and Shape from a Single Color Image. Georgios Pavlakos, Luyang Zhu, Xiaowei Zhou, Kostas Daniilidis. Set up the Python environment: pip install -r requirements.txt. We present a real-time framework for recovering the 3D joint angles and shape of the body from a single RGB image. The current 3D pose estimation is based on the OpenCV solvePnP function. Welcome! I am currently a graduate student at Stanford University, pursuing a Master's in Computer Science.

3. Single-person pose estimation. We show that such a system is capable of generating realistic renderings while being trained on videos annotated with 3D poses and foreground masks. We also introduce a novel loss function that enables PoseCNN to handle symmetric objects. The training data to our system consists solely of unlabeled image sequences capturing scene appearance from different viewpoints, where the poses of the images are not provided. Note: the advantage of RADAR is that it can estimate the object speed directly via the Doppler effect. He is currently a lecturer with the Faculty of Information Engineering, China University of Geosciences, China. Dense Depth Estimation of a Complex Dynamic Scene without Explicit 3D Motion Estimation. Suryansh Kumar, Ram Srivatsav Ghorakavi, Yuchao Dai, Hongdong Li. The offset between the final and predicted poses is obtained by minimizing the matching cost between the online point cloud and the 3D map. High-precision Depth Estimation with the 3D LiDAR and Stereo Fusion. Kihong Park, Seungryong Kim, and Kwanghoon Sohn. IEEE International Conference on Robotics and Automation (ICRA), May 2018. DCTM: Discrete-Continuous Transformation Matching for Semantic Flow. Seungryong Kim, Dongbo Min, Stephen Lin, and Kwanghoon Sohn. Transfer Learning for 3D Pose Estimation. Published in T-PAMI, 2018.

Part of the workshop is the SIXD Challenge on 6D object pose estimation. The current lack of training data makes 3D hand+object pose estimation very challenging. The input to these architectures will be the 18 joint location estimates in 2D, and the output will be 18 joint pose estimates in 3D. Luckily, building an extrinsic camera matrix this way is easy: just build a rigid transformation matrix that describes the camera's pose and then take its inverse (sketched below). Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image, CVPR 2017.
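A small NumPy sketch of the "build the camera's pose and take its inverse" recipe above. For a rigid transform the inverse has the closed form [R^T | -R^T t], so no general matrix inversion is needed; the function and argument names are illustrative:

```python
import numpy as np

def camera_pose_to_extrinsic(R_cam_to_world, cam_center_world):
    """Invert a camera pose (camera-to-world rotation + camera center) to get
    the world-to-camera extrinsic matrix, using the rigid-transform inverse."""
    R_w2c = R_cam_to_world.T
    t_w2c = -R_w2c @ np.asarray(cam_center_world).reshape(3)
    E = np.eye(4)
    E[:3, :3] = R_w2c
    E[:3, 3] = t_w2c
    return E

# Example: a camera sitting 2 m above the world origin, axis-aligned with it.
print(camera_pose_to_extrinsic(np.eye(3), [0.0, 0.0, 2.0]))
```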
Arie-Nachimson & Basri [4] build 3D models of rigid objects and exploit these models to estimate 3D pose from a 2D image as well as a collection of 3D latent features and visibility properties. Fast and Robust Multi-Person 3D Pose Estimation from Multiple Views. Junting Dong, Wen Jiang, Qixing Huang, Hujun Bao, Xiaowei Zhou. PoseNet: A Convolutional Network for Real-Time 6-DOF Camera Relocalization. Alex Kendall, Matthew Grimes, University of Cambridge. A PyTorch implementation of the general pipeline for 2D single human pose estimation. This document serves as a supplement to the material discussed in lectures 11 and 12. OriNet: A Fully Convolutional Network for 3D Human Pose Estimation. [Computer Vision / Perception] New Computer Vision Message Standards.

The 3D rotation of the object is estimated by regressing to a quaternion representation (a small conversion sketch follows after this block). The task of 3D human pose estimation is a particularly interesting example of this sim2real problem, because learning-based approaches perform reasonably well given real training data, yet labeled 3D poses are extremely difficult to obtain in the wild, limiting scalability. The method finds the pose of the detected objects in RGB-D images. Kalman Filter for Motorbike Lean Angle Estimation, also known as the gimbal stabilization problem: you can measure the rotation rate, but you need some validation of the correct lean angle from time to time, because simply integrating the rotation rate accumulates a lot of noise. Using only the existing 3D pose data and 2D pose data, we show state-of-the-art performance on established benchmarks. The important thing to note here is that operator() is a templated method, which assumes that all its inputs and outputs are of some type T. 3D pose for human body pose estimation [26, 11, 16]. 3D hand pose estimation challenge [54].

Real-time Head Pose and Facial Landmark Estimation from Depth Images Using Triangular Surface Patch Features. Papazov, C., et al. In Proc. of IEEE Virtual Reality (VR), Arles, France, 2015 (full paper, acceptance rate about 13%). We parameterize the 3D model of each object with 9 control points. In this work we study the use of 3D hand poses to recognize first-person dynamic hand actions interacting with 3D objects. We will discuss the recent advances on instance-level recognition from images and videos, covering in detail the most recent work in the family of visual recognition tasks. We observe, however, that the occlusion/visibility statuses of overlapping pedestrians provide a useful mutual relationship for visibility estimation: the visibility estimation of one pedestrian facilitates the visibility estimation of another. 3D shape tracking. Tutorial: resource estimation with PyGSLIB. This tutorial will guide you through doing resource estimation with PyGSLIB.
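When the rotation is regressed as a quaternion, as described above, the raw 4-vector coming out of the network is generally not unit length; normalizing it before converting to a matrix keeps downstream geometry valid. A minimal NumPy sketch using the [w, x, y, z] ordering (the ordering itself is an assumption to match to your codebase):

```python
import numpy as np

def quaternion_to_rotation_matrix(q):
    """Normalize a quaternion [w, x, y, z] and convert it to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])

# Sanity check: any positive multiple of [1, 0, 0, 0] maps to the identity rotation.
print(quaternion_to_rotation_matrix(np.array([2.0, 0.0, 0.0, 0.0])))
```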
First of all, the pose estimation is in 2D image space, not in 3D space. We present a new method that matches RGB images to rendered depth images of CAD models for object pose estimation. Figure 1: this paper proposes a deep neural architecture for piece-wise planar depthmap reconstruction from a single RGB image; from left to right, an input image, a piece-wise planar segmentation, a reconstructed depthmap, and a texture-mapped 3D model. We introduce a dense encoder-decoder architecture that learns implicit representations of 3D object orientations. Abstract: this paper proposes the Unified Panoptic Segmentation Network (UPSNet) for the newly introduced panoptic segmentation task. This paper addresses the problem of 3D human pose estimation from single images. It also contains a case study for 3D pose estimation in cheetahs. All of the functions for creating 3D models live in scad-clj. We argue that 3D human pose estimation from a monocular…

My work has been transferred to Kinect Identity in Xbox, Windows Hello, Microsoft Cognitive Services, Bing, Office, and Microsoft XiaoIce, among others. Outputs an 'unproject' matrix, linking each pixel with the coordinates of the 3D surface point projected onto that pixel; it is therefore ideal for calibration / pose estimation using a 3D model as reference (a sketch follows at the end of this section). We build on the approach of state-of-the-art methods which formulate the problem as 2D keypoint detection followed by 3D pose estimation. Generic 3D Representation via Pose Estimation and Matching. 11/08/2017: colloquium talk about 6D object pose estimation at the Tampere University of Technology. Through extensive experiments, we find that the proposed method is able to achieve compelling transfer performance across datasets with domain discrepancy from small scale to large scale. An Integral Pose Regression System for the ECCV 2018 PoseTrack Challenge. The COCO DensePose Challenge is designed to push the state of the art in dense human pose estimation. Direct prediction of 3D body pose and shape remains a challenge even for highly parameterized deep learning models.
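The 'unproject' relation mentioned above, mapping each pixel plus its depth back to a 3D surface point, reduces to a per-pixel back-projection through the pinhole intrinsics. A sketch with made-up intrinsics and a toy constant-depth map standing in for real calibration and sensor data:

```python
import numpy as np

def unproject(depth, K):
    """Back-project an (H, W) metric depth map into an (H, W, 3) map of
    camera-space 3D points using pinhole intrinsics K."""
    H, W = depth.shape
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    u, v = np.meshgrid(np.arange(W), np.arange(H))   # pixel coordinates
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1)

K = np.array([[525.0, 0.0, 319.5],
              [0.0, 525.0, 239.5],
              [0.0, 0.0, 1.0]])
points = unproject(np.ones((480, 640)), K)           # toy depth map: 1 m everywhere
print(points.shape)                                   # (480, 640, 3)
```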