
Microsoft Researchers Introduce Table-GPT: Elevating Language Models to Excel in Two-Dimensional Table Understanding and Tasks

by Tanya Malhotra

With recent developments in the field of artificial intelligence, large language models, including GPT and LLaMA, have continued to show remarkable performance across a broad spectrum of natural language tasks. These models have proven effective in various domains and have advanced the field of natural language processing to a great extent. They are capable of following human instructions and carrying out a variety of tasks. However, they struggle with tasks that require an understanding of tables: they are trained primarily on one-dimensional natural language text, whereas tables are two-dimensional structures.
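To see the mismatch concretely, consider how a table is typically flattened into a one-dimensional token stream before a language model ever reads it. Below is a minimal sketch; the markdown serialization is one common way to show tables to LLMs, not necessarily the exact format used in the paper:

```python
# A minimal sketch: a 2-D table must be flattened into a 1-D token
# stream before an LLM can read it. Cells that sit in the same column
# end up far apart in the serialized text, which is one reason table
# tasks are hard for models trained on linear text.

def to_markdown(header: list[str], rows: list[list[str]]) -> str:
    """Serialize a table as markdown, a common way to present tables to LLMs."""
    lines = ["| " + " | ".join(header) + " |",
             "| " + " | ".join("---" for _ in header) + " |"]
    for row in rows:
        lines.append("| " + " | ".join(row) + " |")
    return "\n".join(lines)

print(to_markdown(["name", "city", "sales"],
                  [["Alice", "Seattle", "120"],
                   ["Bob", "Austin", "95"]]))
# "120" and "95" belong to the same column, but in the flattened
# string an entire row of unrelated tokens separates them.
```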

To address this limitation, a team of researchers has proposed table-tuning. The method continues to train pre-existing language models, such as GPT-3.5 and ChatGPT, on a wide range of table tasks synthesized from real tables, with the main objective of enhancing the models’ capacity to understand and manipulate tables.

The Table-GPT models produced by table-tuning exhibit improved table-understanding capabilities, consistently outperforming the standard GPT-3.5 and ChatGPT on a wide range of table tasks. This means they can interpret and manipulate tabular data more accurately. Although specialized for table tasks, the Table-GPT models retain a high degree of generalizability: they can adapt to new table tasks by responding effectively to a range of human instructions, much as the original GPT-3.5 and ChatGPT handle a variety of natural language tasks.

The primary contributions of the work can be summarized as follows.

Table-Tuning Paradigm: A table-tuning paradigm has been introduced that continues the training of existing language models with the express purpose of improving their performance on table tasks. It employs a variety of table tasks synthesized from real tables using a synthesize-then-augment methodology, illustrated in the sketch below.
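As a concrete illustration of the synthesis step, a real table can be turned into a training example by masking a cell and asking the model to identify it (the missing-value-identification task from the paper). The sketch below is illustrative; the prompt wording and helper name are assumptions, not the paper’s exact templates:

```python
import random

def synthesize_missing_value_task(header, rows):
    """Synthesis step (sketch): mask one random cell of a real table to
    create an (instruction, table, completion) training triple for the
    missing-value-identification task. The prompt wording below is an
    illustrative assumption, not the paper's exact template."""
    r = random.randrange(len(rows))
    c = random.randrange(len(header))
    answer = rows[r][c]
    masked = [row[:] for row in rows]  # copy so the source table is untouched
    masked[r][c] = "[MISSING]"
    instruction = ("The table below has one cell marked [MISSING]. "
                   "Identify its row, column, and likely value.")
    table_text = "\n".join(
        ["| " + " | ".join(header) + " |"] +
        ["| " + " | ".join(row) + " |" for row in masked])
    completion = f"Row {r + 1}, column '{header[c]}': {answer}"
    return instruction, table_text, completion
```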

Data-Augmentation Approaches: Augmentation techniques have been developed at four levels: task-level, table-level, instruction-level, and completion-level. These techniques are essential for preventing overfitting and preserving Table-GPT’s generalizability; by adding diversity to the training data, they strengthen the model. A table-level example is sketched below.
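For example, table-level augmentation can exploit the fact that many table tasks are invariant to column order. Below is a minimal sketch of column-permutation augmentation, one of the table-level techniques the paper describes (the function itself is an illustrative assumption):

```python
import random

def permute_columns(header, rows):
    """Table-level augmentation (sketch): shuffle column order so the
    model learns that most table tasks do not depend on it. One source
    table then yields several distinct training examples whose answers
    are unchanged for order-invariant tasks."""
    perm = list(range(len(header)))
    random.shuffle(perm)
    new_header = [header[i] for i in perm]
    new_rows = [[row[i] for i in perm] for row in rows]
    return new_header, new_rows

print(permute_columns(["name", "city", "sales"],
                      [["Alice", "Seattle", "120"]]))
```

In the same spirit, instruction-level augmentation paraphrases the task instructions so the model does not overfit to one canonical wording.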

Performance on Table Tasks: Out of the box, Table-GPT exhibits strong competence on table tasks in both zero-shot and few-shot settings, meaning it can perform these tasks well even with little or no task-specific training or examples.
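In prompt terms, zero-shot means the model receives only the instruction and the test table, while few-shot prepends a handful of solved demonstrations. A minimal sketch of assembling both prompt styles (the formatting here is an illustrative assumption, not the paper’s template):

```python
def build_prompt(instruction, table_text, examples=()):
    """Assemble a zero-shot prompt (no examples) or a few-shot prompt,
    where `examples` is a sequence of (table_text, answer) demonstrations."""
    parts = [instruction]
    for demo_table, demo_answer in examples:  # few-shot demonstrations
        parts += [demo_table, f"Answer: {demo_answer}"]
    parts += [table_text, "Answer:"]
    return "\n\n".join(parts)

test_table = "| a | b |\n| 1 | [MISSING] |"
zero_shot = build_prompt("Find the cell marked [MISSING].", test_table)
few_shot = build_prompt("Find the cell marked [MISSING].", test_table,
                        examples=[("| a | b |\n| [MISSING] | 4 |",
                                   "Row 1, column 'a'")])
```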

Table Foundation Model: Table-GPT’s adaptability makes it suitable for use as a table foundation model. For downstream single-task optimizations such as task-specific fine-tuning and prompt engineering, it can be a better starting point than vanilla GPT, demonstrating its usefulness across a variety of downstream table applications.

In summary, the proposed table-tuning paradigm offers a way to overcome the difficulty language models have with tables. It improves their comprehension of two-dimensional data structures and equips them to succeed on a wide range of table tasks, both seen and unseen.

Check out the Paper. All credit for this research goes to the researchers on this project.

