PKSHA develops advanced Large Language Models in collaboration with Microsoft Japan
[April 29, 2024]

PKSHA Technology Inc. (TOKYO:3993) has developed one of the first Japanese-English Large Language Models (LLMs) using Retentive Network (RetNet) (*1), in collaboration with Microsoft Japan Co., Ltd. Through this LLM development, PKSHA will further enhance the practicality of generative AI in the business world, primarily by boosting productivity in contact centers and corporate help desks. Operation in actual business environments will begin in stages from April 2024.

Overview of PKSHA's LLM: First Japanese-English LLM using 'RetNet'

PKSHA has developed a new LLM with the following features, leveraging Azure's AI Infrastructure and technical assistance from Microsoft Japan.

This model is the world's first (*2) Japanese-English LLM using RetNet, an architecture anticipated to be a leading successor to the widely used Transformer. RetNet, developed by Microsoft Research Asia, trains quickly and excels in inference speed and memory efficiency when processing long text inputs, while matching or exceeding the accuracy of conventional models. Its superior memory efficiency means the model can run on fewer GPUs (*3) than conventional models, making it more cost effective. This architecture enables our Japanese-English LLM to combine efficient long-text comprehension with excellent response speed.
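
For readers unfamiliar with the architecture, the sketch below illustrates the recurrent form of retention described in the RetNet paper (a single head, with normalization and gating omitted). It is a simplified illustration, not PKSHA's implementation; the key point is that the per-token state has a fixed size, so inference memory does not grow with the length of the input, whereas a Transformer's key-value cache grows with every token processed.

```python
# Minimal single-head sketch of RetNet's recurrent retention form.
# Illustrative only: real implementations use multi-scale retention,
# positional rotations, normalization, and gating.
import numpy as np

def recurrent_retention_step(q_t, k_t, v_t, state, gamma):
    """One decoding step.

    q_t, k_t : (d,)     query/key for the current token
    v_t      : (d_v,)   value for the current token
    state    : (d, d_v) running summary of the entire prefix
    gamma    : scalar decay in (0, 1)
    """
    state = gamma * state + np.outer(k_t, v_t)   # S_t = gamma * S_{t-1} + k_t^T v_t
    o_t = q_t @ state                            # o_t = q_t * S_t
    return o_t, state

# Decode a few tokens: the state stays (d, d_v) no matter how long the prefix gets.
d, d_v, gamma = 64, 64, 0.97
state = np.zeros((d, d_v))
rng = np.random.default_rng(0)
for _ in range(5):
    q_t, k_t, v_t = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d_v)
    o_t, state = recurrent_retention_step(q_t, k_t, v_t, state, gamma)
print(state.shape)  # (64, 64), independent of the number of tokens processed
```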

This is a 7-billion-parameter (7B) model, a size that balances output accuracy and operating cost for deployment in contact center and corporate help desk operations.

For example, when inputting the text from two pages of a Japanese newspaper (*4), this model can output a response approximately three times faster than a conventional model, without compromising on accuracy. The model's efficiency improves proportionally with the volume of input information.
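
The scaling behavior can be pictured with a deliberately simplified cost model (the constants below are hypothetical and chosen only to show the shape of the trend, not to reproduce PKSHA's measurements): with a key-value cache, each generated token attends over the entire prompt, so per-token cost grows with prompt length, while a fixed-size retention state keeps it roughly constant.

```python
# Stylized decode-cost model (hypothetical constants; not PKSHA's benchmark).
BASE = 1.0          # assumed per-token cost independent of prompt length (MLPs, projections)
PER_PREFIX = 2e-4   # assumed incremental attention cost per prompt token

def transformer_token_cost(n):
    return BASE + PER_PREFIX * n   # KV-cache attention grows linearly with the prompt

def retention_token_cost(n):
    return BASE + PER_PREFIX       # fixed-size state update, independent of the prompt

for n in (1_000, 5_000, 20_000):   # prompt lengths in tokens
    ratio = transformer_token_cost(n) / retention_token_cost(n)
    print(f"prompt of {n:>6} tokens -> per-token speedup ~ {ratio:.1f}x")
```

In this toy model the relative advantage widens as the prompt gets longer, which is consistent with the observation that the efficiency gain grows with the volume of input.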

The LLM was developed using DeepSpeed (*5), a deep learning optimization framework developed by Microsoft Research. Microsoft provided RetNet modeling expertise and Azure's purpose-built virtual machines optimized for AI workloads, allowing the team to take advantage of DeepSpeed's strength in highly parallel, distributed processing. With RetNet and DeepSpeed, PKSHA was able to train the model efficiently and validate its performance early with a prototype.
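
For context on what DeepSpeed-based training typically looks like, here is a minimal, hypothetical sketch; the model class, configuration values, and GPU count are placeholders, since PKSHA's actual training code and hyperparameters are not described in this announcement.

```python
# Hypothetical DeepSpeed training setup (placeholders throughout; not PKSHA's code).
import torch
import deepspeed

class PlaceholderLM(torch.nn.Module):
    """Stand-in module for the real RetNet-based LLM."""
    def __init__(self, vocab_size=32000, dim=512):
        super().__init__()
        self.embed = torch.nn.Embedding(vocab_size, dim)
        self.head = torch.nn.Linear(dim, vocab_size)

    def forward(self, tokens):
        return self.head(self.embed(tokens))

ds_config = {                                   # assumed configuration, for illustration
    "train_batch_size": 256,
    "gradient_accumulation_steps": 8,
    "bf16": {"enabled": True},
    "zero_optimization": {"stage": 2},          # shard optimizer state across GPUs
    "optimizer": {"type": "AdamW", "params": {"lr": 3e-4}},
}

model = PlaceholderLM()
# deepspeed.initialize wraps the model for data-parallel, mixed-precision,
# ZeRO-sharded training across the GPUs provided by the launcher.
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)

# Training-loop skeleton: engine.backward / engine.step replace the usual
# loss.backward() / optimizer.step() calls.
# for tokens, labels in dataloader:
#     logits = engine(tokens)
#     loss = torch.nn.functional.cross_entropy(
#         logits.view(-1, logits.size(-1)), labels.view(-1))
#     engine.backward(loss)
#     engine.step()
```

Such a script would typically be launched with the DeepSpeed launcher, for example `deepspeed --num_gpus=8 train.py`, which spawns one process per GPU and handles the distributed setup.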

The benefit of instant answers will transform contact center and corporate help desk operations



Founded in 2012, PKSHA has focused on research and development in natural language processing (NLP) and on implementing AI in society, mainly in the field of communication. With a strong track record of more than 6,000 AI implementations, mainly in contact centers and corporate help desks, PKSHA will promote the use of this LLM to achieve further deployment in these areas. The model was developed on the premise that it will operate in real business environments, based on a practical understanding of customers' challenges.

After further verification and refinement, the new LLM will be rolled out in stages to actual business environments later this spring. PKSHA regards this LLM as a powerful business asset and aims to combine it with its other technologies to create a prosperous society in which people's abilities are maximized through the collaboration of people and software.


Comment from Hiromichi Nozaki, Managing Executive Officer and Chief Technology Officer of Microsoft Japan Co., Ltd.

"Microsoft Japan warmly welcomes the development of a new large-scale language model by PKSHA Technology Inc., and its application to these important customer service experiences. In particular, by utilizing RetNet and DeepSpeed to achieve both Japanese and English LLMs, PKSHA can improve the immediacy and accuracy of communication," said Hiromichi Nozaki, Managing Executive Officer and Chief Technology Officer of Microsoft Japan Co., Ltd. "As we strive to accelerate digital transformation through technological innovation and contribute to the realization of a prosperous society, Microsoft Japan will continue to support PKSHA's vision of 'co-evolution of humans and software' and its innovative efforts to realize this vision."

*1: RetNet is expected to be a successor to the Transformer, which is used in almost all LLMs as of March 2024, as it is reported to offer the following features in comparison:
- Language performance equivalent to or better than Transformer.
- Fast learning by parallel execution.
- Memory-saving and low-latency inference.
For more information, please see the following research paper:
https://www.microsoft.com/en-us/research/publication/retentive-network-a-successor-to-transformer-for-large-language-models/

*2: According to our own research on open-source models published on Hugging Face, a global platform providing machine learning models and other NLP-related tools and resources.

*3: Graphics Processing Unit. A specialized processor that performs high-speed parallel processing, widely used in fields such as graphics, scientific computing, and machine learning.

*4: Assuming approximately 10,000 characters per newspaper page.

*5: DeepSpeed is an easy-to-use deep learning optimization software suite that enables some of the world's most powerful language models, powering unprecedented scale and speed for both training and inference.
- Reshapes the large-model training landscape with scale, efficiency, and ease of use.
- Optimizes large-model inference for scale, latency, and cost.
- Speeds up inference and reduces model size through compression, for efficiency and cost savings.
- DeepSpeed4Science: Microsoft's AI to help unlock scientific mysteries.

For more information, please see the following link:
https://www.deepspeed.ai/

