- PKSHA Technology collaborates with Microsoft Japan to develop a Japanese-English Large Language Model (LLM) using Retentive Network (RetNet).
- The initiative targets enhanced productivity in contact centers and corporate help desks.
- The LLM, built on Azure’s AI Infrastructure with technical support from Microsoft Japan, is positioned to outperform the conventional Transformer architecture.
- Real-world deployment in business settings begins in stages from April 2024.
Main AI News:
In a strategic collaboration, PKSHA Technology has joined forces with Microsoft Japan to develop a Japanese-English Large Language Model (LLM) built on the Retentive Network (RetNet) architecture. The initiative targets generative AI applications in the corporate sphere, with a primary focus on improving efficiency in contact centers and corporate help desks.
Drawing on Azure’s AI Infrastructure and technical support from Microsoft Japan, PKSHA has developed a next-generation LLM. By adopting RetNet in place of the conventional Transformer architecture, the Japanese-English model is designed to deliver comparable quality with more efficient inference. Phased deployment in real-world business environments is scheduled to begin in April 2024.
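To make the Transformer comparison concrete, the sketch below illustrates the core retention mechanism from the published RetNet architecture in simplified form. It is not PKSHA’s implementation; the dimensions, decay factor, and variable names are illustrative assumptions. The point it demonstrates is that the same retention output can be computed either in a parallel form (convenient for training) or recurrently with a fixed-size state, which is what gives constant per-token cost at inference time instead of a key/value cache that grows with sequence length.

```python
import numpy as np

# Simplified single-head retention (illustrative only; the full RetNet adds
# xPos-style rotations, per-head decay rates, and group normalization).
rng = np.random.default_rng(0)
seq_len, d = 6, 4
gamma = 0.9                                   # exponential decay factor (assumed)

X = rng.standard_normal((seq_len, d))         # token representations
W_q = rng.standard_normal((d, d))
W_k = rng.standard_normal((d, d))
W_v = rng.standard_normal((d, d))
Q, K, V = X @ W_q, X @ W_k, X @ W_v

# Parallel form (used for training): Retention(X) = (Q K^T ⊙ D) V,
# where D[n, m] = gamma**(n - m) for n >= m and 0 otherwise.
idx = np.arange(seq_len)
D = np.where(idx[:, None] >= idx[None, :],
             gamma ** (idx[:, None] - idx[None, :]), 0.0)
out_parallel = (Q @ K.T * D) @ V

# Recurrent form (used for inference): a fixed-size state S is updated per
# token, so decoding cost stays O(1) per step rather than growing with length.
S = np.zeros((d, d))
out_recurrent = np.empty_like(out_parallel)
for n in range(seq_len):
    S = gamma * S + np.outer(K[n], V[n])      # state update
    out_recurrent[n] = Q[n] @ S               # read-out for token n

assert np.allclose(out_parallel, out_recurrent)   # the two forms agree
```

The fixed-size recurrent state is the property usually cited when RetNet is described as a successor to the Transformer for long-context, latency-sensitive workloads such as contact-center assistants.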
Conclusion:
The partnership between PKSHA Technology and Microsoft Japan marks a notable step forward in language processing technology for the corporate sector. By building a Japanese-English Large Language Model (LLM) on the Retentive Network (RetNet), the collaboration aims to raise productivity in contact centers and corporate help desks. Supported by Azure’s AI Infrastructure, the model is positioned as a successor to the conventional Transformer architecture. With real-world deployment beginning in April 2024, businesses can expect tangible gains in efficiency for language-based tasks.