                Research

                InternGPT: Solving Vision-Centric Tasks by Interacting with ChatGPT Beyond Language

                Venue (conference/journal): arXiv

                Zhaoyang Liu1, Yinan He1, Wenhai Wang1, Weiyun Wang1, Yi Wang1, Shoufa Chen2,1, Qinglong Zhang1, Yang Yang1, Qingyun Li1, Jiashuo Yu1, Kunchang Li3,1, Zhe Chen4,1, Xue Yang1, Xizhou Zhu5,1, Yali Wang3,1, Limin Wang4,1, Ping Luo2,1, Jifeng Dai6,1, Yu Qiao1

                 1OpenGVLab, Shanghai AI Laboratory     2The University of Hong Kong     3Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences     4Nanjing University     5SenseTime Research     6Tsinghua University


                https://github.com/OpenGVLab/InternGPT 

                We’re going to use the best pointing device in the world. We’re going to use a pointing device that we’re all born with — born with ten of them. We’re going to use our fingers. We’re going to touch this with our fingers. — Steve Jobs

                Abstract

                We present an interactive visual framework named InternGPT, or iGPT for short. The framework integrates chatbots that have planning and reasoning capabilities, such as ChatGPT, with non-verbal instructions such as pointing movements that enable users to directly manipulate images or videos on the screen. Pointing movements (including gestures, cursors, etc.) provide more flexibility and precision in vision-centric tasks that require fine-grained control, editing, and generation of visual content. The name InternGPT stands for interaction, nonverbal, and chatbots. Unlike existing interactive systems that rely on pure language, by incorporating pointing instructions the proposed iGPT significantly improves the efficiency of communication between users and chatbots, as well as the accuracy of chatbots on vision-centric tasks, especially in complicated visual scenarios where the number of objects is greater than two. Additionally, in iGPT, an auxiliary control mechanism is used to improve the control capability of the LLM, and a large vision-language model termed Husky is fine-tuned for high-quality multi-modal dialogue (impressing ChatGPT-3.5-turbo with 93.89% GPT-4 Quality). We hope this work can spark new ideas and directions for future interactive visual systems.
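                The abstract describes the core interaction pattern: a pointing cue (a click or gesture on the image) is combined with the user's language request before the chatbot plans which vision tool to invoke. The sketch below is a minimal, hypothetical Python illustration of that idea only; it is not taken from the InternGPT repository, and the PointingInstruction class and build_prompt function are names assumed here for illustration.

# Conceptual sketch (not the official InternGPT code): folding a pointing
# instruction into the language request handed to an LLM-based planner.
# All class and function names below are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class PointingInstruction:
    """A non-verbal cue: where the user clicked or gestured on the image."""
    image_path: str
    xy: Tuple[int, int]          # pixel coordinates of the pointing action
    gesture: str = "click"       # e.g. "click", "drag", "lasso"


def build_prompt(text: str, pointer: Optional[PointingInstruction]) -> str:
    """Combine the pointing cue with the text prompt the planner receives.

    Instead of forcing the model to resolve "the dog on the left" from
    language alone, the click coordinates disambiguate the target object.
    """
    if pointer is None:
        return text
    return (
        f"{text}\n"
        f"[pointer] image={pointer.image_path} "
        f"gesture={pointer.gesture} at={pointer.xy}"
    )


if __name__ == "__main__":
    ptr = PointingInstruction("street.jpg", xy=(412, 230))
    print(build_prompt("Remove the object I am pointing at.", ptr))

                The design intent sketched here mirrors the paper's argument: the pointing instruction carries the fine-grained spatial information that pure-language prompts express poorly, while the language part still supplies the high-level goal for the chatbot to plan against.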
