Salesforce Open Sources NLP Python Library TaiChi

The package does not require users to have a high level of FSL knowledge

The Salesforce research team has open-sourced its few-shot NLP package named TaiChi. The package does not require users to have a high level of FSL knowledge; it is intended for data scientists and software engineers who wish to construct proof-of-concept products or get quick results but have little experience with few-shot learning (FSL).

The Salesforce research team’s goal was to train models that perform well with little data. To that end, they created an FSL library that employs clever techniques to get good performance with minimal effort, in the hope that it will aid others training models in low-data settings.

The library dramatically lowers the barrier to learning and using the most recent FSL methods by abstracting sophisticated FSL methods into Python objects that can be invoked with just one or two lines of code, so models can be trained even with a limited number of samples.

The researchers identified two base models for DNNC and USLP: nli-pretrained-roberta-base (an English-only model) and nli-pretrained-xlm-roberta-base (which supports 100 languages and can be used for multi- and cross-lingual applications). Both are built on publicly available pre-trained models from Hugging Face and were fine-tuned on the NLI dataset to make them suitable for NLI-style classification.
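As a rough illustration of what NLI-style base models look like in practice, the sketch below loads one English and one multilingual NLI checkpoint through the Hugging Face zero-shot-classification pipeline. The checkpoint names used here (roberta-large-mnli and joeddav/xlm-roberta-large-xnli) are public stand-ins chosen for illustration; they are not the nli-pretrained-* weights that ship with TaiChi.

```python
from transformers import pipeline

# Illustration only: public Hugging Face checkpoints used as stand-ins for the
# English-only and multilingual NLI base models described above; they are not
# TaiChi's released "nli-pretrained-*" weights.
english_nli = pipeline("zero-shot-classification", model="roberta-large-mnli")
multilingual_nli = pipeline("zero-shot-classification",
                            model="joeddav/xlm-roberta-large-xnli")

labels = ["cancel subscription", "track order"]
print(english_nli("I want to cancel my subscription", candidate_labels=labels))
print(multilingual_nli("Je veux annuler mon abonnement", candidate_labels=labels))
```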

Both DNNC and USLP are based on NLI-style classification. DNNC reframes classification as entailment prediction between the query and the utterances in the training set, while USLP condenses this by predicting the entailment relationship between the query utterance and the semantic labels themselves.
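To make the distinction concrete, here is a minimal sketch that scores both schemes with a generic NLI model. It assumes the public roberta-large-mnli checkpoint purely as a stand-in, and the intents, labels, and utterances are invented for illustration; this is not TaiChi's implementation.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stand-in NLI model (not one of TaiChi's released checkpoints).
model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name).eval()

def entailment_score(premise: str, hypothesis: str) -> float:
    """Probability that `premise` entails `hypothesis` under the NLI model."""
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # roberta-large-mnli label order: 0 = contradiction, 1 = neutral, 2 = entailment
    return logits.softmax(dim=-1)[0, 2].item()

query = "my card was charged twice for the same purchase"

# USLP: one entailment pair per semantic label.
labels = ["report a duplicate charge", "check account balance", "freeze my card"]
uslp_prediction = max(labels, key=lambda label: entailment_score(query, label))

# DNNC: one entailment pair per labelled training utterance.
training_set = [
    ("I was billed two times for one order", "report a duplicate charge"),
    ("how much money is left in my account", "check account balance"),
    ("please lock my card, it was stolen", "freeze my card"),
]
dnnc_prediction = max(training_set,
                      key=lambda pair: entailment_score(query, pair[0]))[1]

print(uslp_prediction, dnnc_prediction)
```

Note that USLP scores as many pairs as there are labels, whereas DNNC scores as many pairs as there are labelled training utterances, which is where the efficiency difference discussed next comes from.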

For training and benchmarking, the team used the CLINC150 dataset. Because USLP is based on entailment between the query utterance and the labels rather than all training samples, their findings imply that it is more efficient in training and serving than DNNC: the number of entailment pairs scored per query scales with the number of intents rather than with the size of the training set. DNNC also requires more than one example per intent, a restriction USLP does not have.
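For readers who want to set up a comparable low-data experiment, the sketch below samples a one-shot training split from CLINC150, assuming the dataset as published on the Hugging Face hub under the name clinc_oos; the sampling routine is a generic illustration, not TaiChi's benchmarking code.

```python
import random
from collections import defaultdict
from datasets import load_dataset

# Assumption: CLINC150 is available on the Hugging Face hub as "clinc_oos",
# with "text" and "intent" fields. This is a generic few-shot sampler, not
# TaiChi's benchmarking code.
clinc_train = load_dataset("clinc_oos", "plus", split="train")

def few_shot_split(dataset, shots_per_intent=1, seed=0):
    """Sample `shots_per_intent` utterances per intent for a few-shot training set."""
    by_intent = defaultdict(list)
    for example in dataset:
        by_intent[example["intent"]].append(example["text"])
    rng = random.Random(seed)
    return {intent: rng.sample(utterances, min(shots_per_intent, len(utterances)))
            for intent, utterances in by_intent.items()}

one_shot = few_shot_split(clinc_train, shots_per_intent=1)
print(len(one_shot), "intents,", sum(len(v) for v in one_shot.values()), "utterances")
```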

In one-shot experiments, USLP outperforms the conventional classification approach. Model performance often improves with longer, more semantically meaningful labels, although the improvement diminishes as more training data becomes available.
