Adam & Eva Inc. Shatters AI Training Speed Record With Cutting-Edge Fully Parallel AI Architecture

  • Adam & Eva Inc. achieves a groundbreaking speed record in AI training with their latest fully parallel architecture.
  • The new neuro-symbolic AI, EVA, achieves speeds of 3 Gb per hour in Large Language Model (LLM) training.
  • Unlike traditional transformer-based LLMs, Adam & Eva’s AI uses a learning rate that scales linearly with dataset size, significantly reducing computing power, time, and cost.
  • CEO Paul Kewene-Hite highlights EVA’s scalability and cost-saving benefits for clients, with options for self-training.
  • The impressive results were obtained on a 56-core CPU server and will be presented at the MadHat event in San Francisco.
  • Adam & Eva Inc. plans to release EVA through public cloud providers in 2024, further extending its accessibility and impact.

Main AI News:

Adam & Eva Inc. has achieved a groundbreaking milestone in Generative AI with the latest iteration of its neuro-symbolic AI architecture. Clocking in at an impressive 3 Gb per hour, this new version of Adam & Eva’s AI, dubbed EVA, sets a new standard in speed and efficiency for Large Language Model (LLM) training.

Diverging from traditional transformer-based LLMs, Adam & Eva’s neuro-symbolic AI takes a different approach: a learning rate that scales linearly with the dataset’s size. This translates into a substantial reduction in computing power, training time, and overall cost, marking a significant step forward in AI development.
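The release does not explain how EVA derives its learning rate from dataset size. Purely as a sketch of the general idea, a learning rate that scales linearly with the amount of training data might look like the following; the function name, reference size, and scaling rule are assumptions for illustration, not Adam & Eva’s implementation:

```python
# Hypothetical sketch only: Adam & Eva has not published EVA's internals.
# The function name, reference size, and scaling rule are assumptions meant to
# illustrate "a learning rate that scales linearly with dataset size".

def linear_lr(base_lr: float, dataset_gb: float, reference_gb: float = 1.0) -> float:
    """Return a learning rate scaled linearly with dataset size (illustrative)."""
    return base_lr * (dataset_gb / reference_gb)

# Example: with a 1e-4 base rate, a 3 GB corpus would train at 3e-4 under this rule.
print(linear_lr(1e-4, dataset_gb=3.0))  # -> 0.0003
```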

Paul Kewene-Hite, CEO & Co-Founder of Adam & Eva Inc., expressed his enthusiasm about this remarkable achievement, stating, “We are thrilled to announce this breakthrough performance in training EVA, our game-changing LLM. Beyond its exceptional computational speeds, EVA boasts unparalleled scalability limited only by dataset size and available CPUs. Moreover, clients have the option to self-train our LLM EVA, leading to substantial savings in both capital and operational expenditures.”

The 3 Gb per hour training speed achieved by Adam & Eva’s latest fully parallel AI architecture was demonstrated on a 56-core CPU server hosted by Hyve Managed Hosting. Adam & Eva will unveil its LLM EVA at the upcoming MadHat event in San Francisco on April 19th and plans to make it available through public cloud providers in 2024.

Established in 2023 as a Delaware corporation, Adam & Eva, Inc. has offices in San Diego (CA), Boston (MA), Palo Alto (CA), and Haifa, Israel, underscoring its commitment to global innovation and collaboration (www.adam-eva.ai). The company specializes in traceable and scalable neuro-symbolic Generative AI architecture; its AI stands out for linear computational complexity in both training and operational modes, setting a new benchmark in the industry.
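For context on what linear complexity implies (the release gives no architectural details, so this is a generic illustration rather than a description of EVA): with linear scaling, doubling the input roughly doubles compute, whereas a quadratic cost such as standard transformer self-attention quadruples when the input doubles.

```python
# Generic illustration (not EVA-specific): relative cost growth for
# linear vs. quadratic scaling as the input size grows.
for n in (1, 2, 4, 8):  # n = relative input size (dataset or sequence length)
    print(f"size x{n}: linear cost x{n}, quadratic cost x{n**2}")
```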

Conclusion:

Adam & Eva Inc.’s achievement signifies a major advancement in AI training, offering unparalleled speed, efficiency, and scalability. With EVA’s linear learning rate and cost-saving features, the market can expect a shift towards more accessible and efficient AI solutions, empowering businesses to harness the power of AI with reduced barriers to entry. This breakthrough underscores Adam & Eva’s leadership in the industry and sets a new standard for AI development and deployment.
