Yifeng Zhu
Ph.D. Candidate, Department of Computer Science, The University of Texas at Austin
Robot Perception and Learning Lab (RPL)
Email: yifeng.zhu [at] utexas.edu

Education
Ph.D. in Computer Science, The University of Texas at Austin (entered Fall 2019, in progress)
B.Eng. in Automation, Zhejiang University, 2018
About

I am a Ph.D. candidate in Computer Science at The University of Texas at Austin, co-advised by Prof. Yuke Zhu and Prof. Peter Stone, and a member of the Robot Perception and Learning (RPL) Lab, which Prof. Yuke Zhu directs as an Assistant Professor in the Department of Computer Science. My research lies at the intersection of robot learning, robot control, and robot planning. My research goal is to develop scalable skill-learning frameworks for diverse and complex robotic embodiments, and my long-term goal is to create intelligent robots that can be reliable companions for humans. I follow the latest robotics and embodied AI research, spanning computer vision, reinforcement learning, neuro-symbolic AI, foundation models, and control, and I enjoy interacting with people both inside and outside my field. Previously, I was fortunate to work as a student intern in Professor Yuke Zhu's group at UT Austin and in Professor Yi Wu's group at IIIS. For a full list of my publications, please see my Google Scholar profile; for more details, please see my CV.

Selected Research

VIOLA: Imitation Learning for Vision-Based Manipulation with Object Proposal Priors
Yifeng Zhu, Abhishek Joshi, Peter Stone, Yuke Zhu. Conference on Robot Learning (CoRL), December 2022.
We introduce VIOLA, an object-centric imitation learning approach to learning closed-loop visuomotor policies for robot manipulation. Our approach constructs object-centric representations based on general object proposals from a pre-trained vision model, and uses a transformer-based policy to reason over these representations and attend to the task-relevant visual factors for action prediction. More videos and model details can be found in the supplementary material and on the project website: https://ut-austin-rpl.github.io/VIOLA/.
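To make the object-centric policy design concrete, here is a minimal, schematic PyTorch sketch of the idea: per-object proposal features and a global context feature are treated as tokens, a small transformer encoder reasons over them, and an MLP head predicts the action. The class and parameter names (ObjectCentricPolicy, feat_dim, and so on) are illustrative assumptions; this is not the released VIOLA implementation.

```python
# Schematic sketch of an object-centric transformer policy (illustrative only;
# not the released VIOLA code). Tokens = [global context] + [object proposals].
import torch
import torch.nn as nn


class ObjectCentricPolicy(nn.Module):
    def __init__(self, feat_dim=256, n_heads=4, n_layers=2, action_dim=7):
        super().__init__()
        layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=n_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.action_head = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, action_dim)
        )

    def forward(self, global_feat, proposal_feats):
        # global_feat: (B, feat_dim); proposal_feats: (B, K, feat_dim) from the
        # top-K region proposals of a pre-trained vision model.
        tokens = torch.cat([global_feat.unsqueeze(1), proposal_feats], dim=1)
        encoded = self.encoder(tokens)           # attend over task-relevant factors
        return self.action_head(encoded[:, 0])   # read out the action from the global token


# Example with random features: batch of 8 frames, 10 object proposals per frame.
policy = ObjectCentricPolicy()
action = policy(torch.randn(8, 256), torch.randn(8, 10, 256))
print(action.shape)  # torch.Size([8, 7])
```

In practice, the proposal features would come from a pre-trained region proposal model rather than the random tensors used here.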
GROOT: Learning Generalizable Manipulation Policies with Object-Centric 3D Representations
Yifeng Zhu, Zhenyu Jiang, Peter Stone, Yuke Zhu. Conference on Robot Learning (CoRL), November 2023.
We introduce GROOT, an imitation learning method for learning robust policies with object-centric and 3D priors. GROOT builds policies that generalize beyond their initial training conditions for vision-based manipulation. Code is available at https://github.com/UT-Austin-RPL/GROOT.

ORION: Vision-Based Manipulation from Single Human Video with Open-World Object Graphs
Yifeng Zhu, Arisrei Lim, Peter Stone, Yuke Zhu. Technical report, arXiv:2405.20321, May 2024.
We present an object-centric approach to empower robots to learn vision-based manipulation skills from human videos.

OKAMI: Teaching Humanoid Robots Manipulation Skills through Single Video Imitation
Jinhan Li, Yifeng Zhu*, Yuqi Xie*, Zhenyu Jiang*, Mingyo Seo, Georgios Pavlakos, Yuke Zhu. Conference on Robot Learning (CoRL), November 2024.
We study the problem of teaching humanoid robots manipulation skills by imitating single video demonstrations. OKAMI rollout trajectories are further leveraged to train closed-loop visuomotor policies, which achieve an average success rate of 79.2% without the need for labor-intensive teleoperation. More videos can be found on our website: https://ut-austin-rpl.github.io/OKAMI/. In related work at CoRL 2024 (with Zhenyu Jiang*, Yuqi Xie*, Jinhan Li, Ye Yuan, and Yuke Zhu), we further study humanoid robots, whose human-like embodiment has the potential to integrate seamlessly into human environments.
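The closed-loop visuomotor policies mentioned above are trained by imitation on the collected rollout trajectories. As a minimal illustration of that final supervised stage, here is a generic behavior-cloning loop; it is a simplified sketch with placeholder data and dimensions, not the OKAMI training pipeline.

```python
# Minimal behavior-cloning loop (illustrative sketch, not the OKAMI pipeline).
# Assumes a dataset of (observation, action) pairs collected from rollouts.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

obs_dim, action_dim = 64, 7
policy = nn.Sequential(
    nn.Linear(obs_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, action_dim),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# Placeholder tensors standing in for encoded observations and recorded actions.
dataset = TensorDataset(torch.randn(1024, obs_dim), torch.randn(1024, action_dim))
loader = DataLoader(dataset, batch_size=64, shuffle=True)

for epoch in range(10):
    for obs, action in loader:
        loss = nn.functional.mse_loss(policy(obs), action)  # L2 imitation loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```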
Bottom-Up Skill Discovery from Unsegmented Demonstrations for Long-Horizon Robot Manipulation
Yifeng Zhu, Peter Stone, Yuke Zhu. IEEE Robotics and Automation Letters (RA-L), January 2022.
We tackle real-world long-horizon robot manipulation tasks through skill discovery. We present a bottom-up approach to learning a library of reusable skills from unsegmented demonstrations and use these skills to synthesize prolonged robot behaviors. The method has been deployed successfully on a physical robot to solve challenging long-horizon tasks, such as dining table arrangement and coffee making.

LIBERO: Benchmarking Knowledge Transfer for Lifelong Robot Learning
Bo Liu*, Yifeng Zhu*, Chongkai Gao*, Yihao Feng, Qiang Liu, Yuke Zhu, Peter Stone. NeurIPS 2023 Datasets and Benchmarks Track, December 2023 (oral presentations at RAP4Robots, ICRA 2023, and TGR, CoRL 2023).
Lifelong learning offers a promising paradigm for building a generalist agent that learns and adapts over its lifespan.

LOTUS: Continual Imitation Learning for Robot Manipulation Through Unsupervised Skill Discovery
Weikang Wan, Yifeng Zhu*, Rutav Shah*, Yuke Zhu. IEEE International Conference on Robotics and Automation (ICRA), May 2024.
We introduce LOTUS, a continual imitation learning algorithm that empowers a physical robot to continuously and efficiently learn to solve new manipulation tasks throughout its lifespan. The core idea behind LOTUS is constructing an ever-growing skill library from a sequence of new tasks. Code is available at https://github.com/UT-Austin-RPL/Lotus.
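Both skill-discovery works above revolve around a library of low-level skills plus a high-level selector. The sketch below shows that control flow in simplified form: a meta-controller picks a skill index from the current observation, the chosen skill policy produces the action, and new skills can be appended as new tasks arrive. The class and method names are my own illustrative choices, not the released code.

```python
# Simplified skill-library control flow (illustrative; not the released code).
import torch
import torch.nn as nn


class SkillLibraryAgent(nn.Module):
    def __init__(self, obs_dim=64, action_dim=7):
        super().__init__()
        self.obs_dim, self.action_dim = obs_dim, action_dim
        self.skills = nn.ModuleList()   # ever-growing library of low-level skill policies
        self.meta = None                # high-level skill selector; built once skills exist

    def add_skill(self):
        # Append a new skill policy and rebuild the meta-controller's output layer.
        # (In a real system the selector's weights would be preserved and expanded.)
        self.skills.append(nn.Sequential(
            nn.Linear(self.obs_dim, 128), nn.ReLU(),
            nn.Linear(128, self.action_dim),
        ))
        self.meta = nn.Linear(self.obs_dim, len(self.skills))

    def act(self, obs):
        # Pick a skill from the current observation, then query that skill for an action.
        skill_id = int(self.meta(obs).argmax(dim=-1))
        return self.skills[skill_id](obs), skill_id


agent = SkillLibraryAgent()
for _ in range(3):                       # e.g., three skills discovered from demonstrations
    agent.add_skill()
action, skill_id = agent.act(torch.randn(64))
print(action.shape, skill_id)            # torch.Size([7]) and an index in {0, 1, 2}
```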
Additional Publications

BUMBLE. Rutav Shah, Albert Yu, Yifeng Zhu, Yuke Zhu, Roberto Martín-Martín. Technical report, arXiv:2410.06237, October 2024. To operate at a building scale, service robots must perform very long-horizon mobile manipulation tasks by navigating to different rooms, accessing different floors, and interacting with a wide and unseen range of everyday objects; we refer to these tasks as building-wide mobile manipulation. To tackle these inherently long-horizon tasks, we propose BUMBLE, a unified VLM-based framework. Code is available at https://github.com/UT-Austin-RobIn/BUMBLE.

Muzhi Han, Yifeng Zhu, Song-Chun Zhu, Ying Nian Wu, Yuke Zhu. Robotics: Science and Systems (RSS), July 2024.

Learning to Walk by Steering: Perceptive Quadrupedal Locomotion in Dynamic Environments. Mingyo Seo, Ryan Gupta, Yifeng Zhu, Alexy Skoutnev, Luis Sentis, Yuke Zhu. IEEE International Conference on Robotics and Automation (ICRA), May 2023.

Xiaohan Zhang, Yifeng Zhu, Yan Ding, Yuke Zhu, Peter Stone, Shiqi Zhang. IEEE International Conference on Robotics and Automation (ICRA), May 2022. Task and motion planning (TAMP) algorithms aim to help robots achieve task-level goals while maintaining motion-level feasibility.

Sihang Guo, Ruohan Zhang, Bo Liu, Yifeng Zhu, Mary Hayhoe, Dana Ballard, Peter Stone. Advances in Neural Information Processing Systems (NeurIPS), 2021.

Yifeng Zhu, Jonathan Tremblay, Stan Birchfield, Yuke Zhu. IEEE International Conference on Robotics and Automation (ICRA), May 2021. We present a visually grounded hierarchical planning algorithm for long-horizon manipulation tasks.

Guanya Shi, Yifeng Zhu, Jonathan Tremblay, Stan Birchfield, Fabio Ramos, Anima Anandkumar, Yuke Zhu. IEEE International Conference on Robotics and Automation (ICRA), May 2021. Deep learning-based object pose estimators are often unreliable and overconfident, especially when the input image is outside the training domain.

UT Austin Villa: an open-source Gym environment for deep reinforcement learning in the RoboCup 3D simulation domain. Bo Liu, William Macke, Caroline Wang, Yifeng Zhu, Patrick MacAlpine, Peter Stone.

Synergies Between Affordance and Geometry: 6-DoF Grasp Detection via Implicit Representations
Zhenyu Jiang, Yifeng Zhu, Maxwell Svetlik, Kuan Fang, Yuke Zhu. Robotics: Science and Systems (RSS), July 2021.
Grasp detection in clutter requires the robot to reason about the 3D scene from incomplete and noisy perception. GIGA (Grasp detection via Implicit Geometry and Affordance) is a network that jointly detects 6-DoF grasp poses and reconstructs the 3D scene. GIGA takes advantage of deep implicit functions, a continuous and memory-efficient representation, to enable differentiable training of both tasks.
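The deep implicit function in GIGA can be thought of as a decoder that maps a continuous 3D query point, conditioned on features of the partially observed scene, to per-point outputs, here a grasp-affordance head and an occupancy (geometry) head, so both tasks can be supervised at arbitrary query locations. The sketch below is a heavily simplified illustration of that interface, not the GIGA architecture.

```python
# Schematic implicit decoder for joint grasp-affordance and geometry prediction
# (illustrative simplification; not the released GIGA architecture).
import torch
import torch.nn as nn


class ImplicitGraspDecoder(nn.Module):
    def __init__(self, scene_dim=128, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(scene_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.grasp_quality = nn.Linear(hidden, 1)  # affordance head
        self.occupancy = nn.Linear(hidden, 1)      # geometry head

    def forward(self, scene_feat, query_points):
        # scene_feat: (B, scene_dim) encoding of the observed (partial) scene.
        # query_points: (B, N, 3) arbitrary 3D locations at which to evaluate.
        B, N, _ = query_points.shape
        ctx = scene_feat.unsqueeze(1).expand(B, N, -1)
        h = self.mlp(torch.cat([ctx, query_points], dim=-1))
        return torch.sigmoid(self.grasp_quality(h)), torch.sigmoid(self.occupancy(h))


decoder = ImplicitGraspDecoder()
quality, occ = decoder(torch.randn(2, 128), torch.rand(2, 1000, 3))
print(quality.shape, occ.shape)  # (2, 1000, 1) each
```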
Software

Deoxys (2020 – 2023) [Documentation]
Deoxys is a modularized, real-time controller library for the Franka Emika Panda arm, developed to facilitate a wide range of robot learning research. It aims to democratize basic knowledge of robot manipulation within the robot learning community by open-sourcing the controller implementation. Deoxys has supported many past research projects in the Robot Perception and Learning Group at The University of Texas at Austin; for example, the real-robot experiments in "Robot Learning on the Job: Human-in-the-Loop Autonomy and Learning During Deployment" (Huihan Liu, Soroush Nasiriany, Lance Zhang, Zhiyao Bao, Yuke Zhu) used Deoxys as the controller library for the Franka Emika Panda. A gallery of past research built on Deoxys is available for reference.
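A typical use of a controller library like Deoxys is to stream high-level actions (for example, end-effector pose deltas for an operational-space controller) at a fixed policy rate while the library handles real-time torque control underneath. The sketch below reflects my recollection of the public deoxys_control examples; the module paths, configuration file names, and call signatures should all be treated as assumptions and checked against the Deoxys documentation.

```python
# Hedged usage sketch for Deoxys (module names, config paths, and signatures are
# assumptions based on recollection of the public examples; verify against the docs).
import numpy as np
from deoxys import config_root                        # assumed helper for config paths
from deoxys.franka_interface import FrankaInterface   # assumed interface class
from deoxys.utils import YamlConfig                   # assumed config loader

robot_interface = FrankaInterface(config_root + "/charmander.yml")               # assumed robot config
controller_cfg = YamlConfig(config_root + "/osc-pose-controller.yml").as_easydict()

for _ in range(100):
    # 6-DoF end-effector delta plus gripper command for an OSC_POSE controller.
    action = np.zeros(7)
    action[2] = -0.05  # small downward motion along z (example command)
    robot_interface.control(
        controller_type="OSC_POSE",
        action=action,
        controller_cfg=controller_cfg,
    )

robot_interface.close()
```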
deoxys-vision
deoxys-vision, maintained by Yifeng Zhu and Zhenyu Jiang, provides vision utilities and documentation (for example, camera installation) to accompany Deoxys.

robosuite
Yuke Zhu, Josiah Wong, Ajay Mandlekar, Roberto Martín-Martín, Abhishek Joshi, Soroush Nasiriany, Yifeng Zhu. Technical report, arXiv:2009.12293, September 2020.
robosuite is a simulation framework for robot learning powered by the MuJoCo physics engine.
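robosuite exposes a Gym-style interface for its simulated manipulation environments. The following minimal example uses standard robosuite calls; the random-action policy is only a placeholder for a learned one.

```python
# Minimal robosuite rollout with a random policy (placeholder for a learned policy).
import numpy as np
import robosuite as suite

env = suite.make(
    env_name="Lift",             # pick-up task with a single cube
    robots="Panda",              # Franka Emika Panda arm
    has_renderer=False,          # set True for on-screen visualization
    has_offscreen_renderer=False,
    use_camera_obs=False,        # use low-dimensional state observations
)

obs = env.reset()
low, high = env.action_spec      # per-dimension action bounds
for _ in range(200):
    action = np.random.uniform(low, high)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```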
Teaching and Service

Teaching Assistant, CS391R Robot Learning, UT Austin (Fall 2021 and Spring 2023). The class is intended for graduate students and ambitious undergraduates who are passionate about emerging technologies at the intersection of robotics and AI, especially those seeking research opportunities in this area.

Organizer (with Yuke Zhu, Joydeep Biswas, Michelle Garel, and Josh Hoffman) of a Texas robotics community event; for more information, see the TEROS website at https://teros-texas.github.io/.

Acknowledgments

My research has taken place in the Robot Perception and Learning Group (RPL) and the Learning Agents Research Group (LARG) at UT Austin. RPL research has been partially supported by the National Science Foundation (CNS-1955523, FRR-2145283, EFRI-2318065), the Office of Naval Research (N00014-22-1-2204), the MLL Research Award from the Machine Learning Laboratory at UT Austin, and the Amazon Research Awards. LARG research is supported in part by NSF (CPS-1739964, IIS-1724157, NRI-1925082), ONR (N00014-18-2243), FLI (RFP2-000), ARO (W911NF-19-2-0333), DARPA, Lockheed Martin, and GM.
{"Title":"What is the best girl name?","Description":"Wheel of girl names","FontSize":7,"LabelsList":["Emma","Olivia","Isabel","Sophie","Charlotte","Mia","Amelia","Harper","Evelyn","Abigail","Emily","Elizabeth","Mila","Ella","Avery","Camilla","Aria","Scarlett","Victoria","Madison","Luna","Grace","Chloe","Penelope","Riley","Zoey","Nora","Lily","Eleanor","Hannah","Lillian","Addison","Aubrey","Ellie","Stella","Natalia","Zoe","Leah","Hazel","Aurora","Savannah","Brooklyn","Bella","Claire","Skylar","Lucy","Paisley","Everly","Anna","Caroline","Nova","Genesis","Emelia","Kennedy","Maya","Willow","Kinsley","Naomi","Sarah","Allison","Gabriella","Madelyn","Cora","Eva","Serenity","Autumn","Hailey","Gianna","Valentina","Eliana","Quinn","Nevaeh","Sadie","Linda","Alexa","Josephine","Emery","Julia","Delilah","Arianna","Vivian","Kaylee","Sophie","Brielle","Madeline","Hadley","Ibby","Sam","Madie","Maria","Amanda","Ayaana","Rachel","Ashley","Alyssa","Keara","Rihanna","Brianna","Kassandra","Laura","Summer","Chelsea","Megan","Jordan"],"Style":{"_id":null,"Type":0,"Colors":["#f44336","#710d06","#9c27b0","#3e1046","#03a9f4","#014462","#009688","#003c36","#8bc34a","#38511b","#ffeb3b","#7e7100","#ff9800","#663d00","#607d8b","#263238","#e91e63","#600927","#673ab7","#291749","#2196f3","#063d69","#00bcd4","#004b55","#4caf50","#1e4620","#cddc39","#575e11","#ffc107","#694f00","#9e9e9e","#3f3f3f","#3f51b5","#192048","#ff5722","#741c00","#795548","#30221d"],"Data":[[0,1],[2,3],[4,5],[6,7],[8,9],[10,11],[12,13],[14,15],[16,17],[18,19],[20,21],[22,23],[24,25],[26,27],[28,29],[30,31],[0,1],[2,3],[32,33],[4,5],[6,7],[8,9],[10,11],[12,13],[14,15],[16,17],[18,19],[20,21],[22,23],[24,25],[26,27],[28,29],[34,35],[30,31],[0,1],[2,3],[32,33],[4,5],[6,7],[10,11],[12,13],[14,15],[16,17],[18,19],[20,21],[22,23],[24,25],[26,27],[28,29],[34,35],[30,31],[0,1],[2,3],[32,33],[6,7],[8,9],[10,11],[12,13],[16,17],[20,21],[22,23],[26,27],[28,29],[30,31],[0,1],[2,3],[32,33],[4,5],[6,7],[8,9],[10,11],[12,13],[14,15],[18,19],[20,21],[22,23],[24,25],[26,27],[28,29],[34,35],[30,31],[0,1],[2,3],[32,33],[4,5],[6,7],[8,9],[10,11],[12,13],[36,37],[14,15],[16,17],[18,19],[20,21],[22,23],[24,25],[26,27],[28,29],[34,35],[30,31],[2,3],[32,33],[4,5],[6,7]],"Space":null},"ColorLock":null,"LabelRepeat":1,"ThumbnailUrl":"","Confirmed":true,"TextDisplayType":null,"Flagged":false,"DateModified":"2020-02-05T05:14:","CategoryId":3,"Weights":[],"WheelKey":"what-is-the-best-girl-name"}