[Change Log E2] Andromeda, The Cortex, Kosmos-X, The Harvestor, and more! May 5th - 20th

date
May 8, 2023
slug
change-log-may5-may20
status
Published
tags
Changelog
summary
Andromeda, The Cortex, Kosmos-X
type
Post
This is our change log, where we document everything that we do. There are two sections: one for research and one for our services, such as Athena.

Research

 

Andromeda:

An all-new SOTA language model that can process ultra-long sequences at high speed.

Changes:

  • Integration with XPOS
  • Integration with ALiBi bias (sketched below)
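As a rough illustration of the ALiBi idea only (not Andromeda's actual code; the function name and slope schedule below are our assumptions based on the ALiBi paper), the bias is a per-head linear penalty on query-key distance, added to the attention logits before the softmax:

import torch

def alibi_bias(num_heads: int, qlen: int, klen: int) -> torch.Tensor:
    # One slope per head: the geometric schedule 2^(-8/num_heads), 2^(-16/num_heads), ...
    slopes = torch.tensor([2.0 ** (-8.0 * (h + 1) / num_heads) for h in range(num_heads)])
    # Signed distance j - i between each key position j and query position i.
    distance = torch.arange(klen)[None, :] - torch.arange(qlen)[:, None]   # (qlen, klen)
    return slopes[:, None, None] * distance[None, :, :]                    # (num_heads, qlen, klen)

# usage sketch: logits = logits + alibi_bias(num_heads, qlen, klen).unsqueeze(0)  # broadcast over batch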
Here's a summary of the improvements made to the ConditionalRoutedAttention class, along with mini examples and the benefits of each optimization/change:
  1. Integration of Relative Position Bias
Before: The original ConditionalRoutedAttention class did not include relative position bias in the attention computations.
After: The updated ConditionalRoutedAttention class now includes an optional relative position bias for both light_attn and heavy_attn. This bias is added to the attention logits before applying the softmax function.
# add the relative position bias to the light branch when the flag is enabled
if self.use_relative_position_bias:
    light_out += self.relative_position_bias(batch, qlen, klen).squeeze(0)
Benefit: Incorporating relative position bias allows the model to be more aware of the positional relationships between tokens, potentially leading to better performance in tasks that rely heavily on the order of input tokens.
  2. Handling of qlen and klen
Before: The variables qlen and klen were not defined in the original code.
After: We now define qlen and klen based on the input tensor's shape.
qlen, klen = x.shape[1], x.shape[1]
Benefit: Defining qlen and klen ensures that the relative position bias is computed with the correct dimensions for the input sequences.
Overall, the integration of relative position bias and proper handling of qlen and klen are aimed at improving the performance of the ConditionalRoutedAttention class by enabling the model to capture positional information more effectively.
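A minimal sketch of how these two pieces can fit together (illustrative only; the class and attribute names are our assumptions, not the actual ConditionalRoutedAttention code):

import torch
import torch.nn as nn

class RelativePositionBias(nn.Module):
    # Learned bias indexed by the (clipped) query-key distance, one value per head.
    def __init__(self, num_heads: int, max_distance: int = 128):
        super().__init__()
        self.max_distance = max_distance
        self.bias = nn.Embedding(2 * max_distance + 1, num_heads)

    def forward(self, qlen: int, klen: int) -> torch.Tensor:
        rel = torch.arange(klen)[None, :] - torch.arange(qlen)[:, None]    # (qlen, klen)
        rel = rel.clamp(-self.max_distance, self.max_distance) + self.max_distance
        return self.bias(rel).permute(2, 0, 1)                             # (heads, qlen, klen)

# inside an attention forward pass (sketch):
# qlen, klen = x.shape[1], x.shape[1]                   # both equal the sequence length for self-attention
# logits = logits + rel_bias(qlen, klen).unsqueeze(0)   # broadcast over the batch dimension
# attn = logits.softmax(dim=-1)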

Andromeda Roadmap:

  • Test the new attention with XPOS and ALiBi
  • Create a modular training strategy
  • Train on Books3
 


The Cortex:

A mesh of standard operating procedures to help streamline and democratize state-of-the-art AI research.

Changes:

Here is a summarized list of improvements made to our research workflow, along with mini examples and the benefits of each optimization/change:
  1. Improvement: Streamlined the literature review process.
Before: Conducting literature reviews manually, with no specific guidelines or tools. After: Using systematic search strategies, citation management tools, and note-taking methods. Benefit: Enhanced efficiency and organization, enabling researchers to quickly identify relevant studies and build on existing knowledge.
  2. Improvement: Established a collaborative workflow.
Before: Researchers working individually with limited interaction. After: Implementing a collaboration platform (e.g., Jupyter Notebook, Google Colab) and defining clear roles and responsibilities. Benefit: Improved teamwork, idea exchange, and research efficiency.
  3. Improvement: Created a shared repository for research materials.
Before: Disorganized storage of code, data, and documents across multiple devices and platforms. After: Using a centralized repository (e.g., GitHub, GitLab) to store all research-related materials. Benefit: Easier access, version control, and collaboration for all team members.
  4. Improvement: Developed a detailed research plan with milestones and a timeline.
Before: Starting research without clear goals, hypotheses, or a timeline. After: Formulating research questions, designing experiments, and establishing a project timeline. Benefit: Better project management, focus, and progress tracking.
  5. Improvement: Implemented regular code reviews and debugging sessions.
Before: Individual code development with limited oversight or quality control. After: Conducting regular code reviews and debugging sessions to ensure code quality and functionality. Benefit: Improved code quality, reduced errors, and facilitated knowledge sharing among team members.
  6. Improvement: Optimized model training on cloud instances.
Before: Running model training on local machines or suboptimal cloud configurations. After: Configuring AWS EC2 instances with appropriate hardware and software for efficient model training. Benefit: Reduced training time, computational costs, and resource usage.
  7. Improvement: Adopted an iterative research process with continuous evaluation and refinement.
Before: Conducting research without regular evaluation or iteration. After: Regularly evaluating model performance, comparing results with state-of-the-art methods, and refining hypotheses, experiments, and models. Benefit: Increased research significance, better alignment with the AI community's needs, and accelerated progress.
  8. Improvement: Enhanced documentation and sharing of research results.
Before: Limited documentation and sharing of research findings and code. After: Preparing comprehensive research papers, presenting at conferences or journals, and contributing to open-source repositories. Benefit: Greater visibility, feedback, and collaboration opportunities within the AI community.

Roadmap:

  • Create detailed tutorials for each epoch, e.g., how to find good ideas to experiment on
  • Create the needed tools for each epoch

The Harvestor:

Simple modules to extract high-quality multi-modality pretraining data from the web.
 

Changes:

  1. YouTube API: Interacting with the YouTube API allows the Harvestor to obtain video metadata, descriptions, and comments. This provides a richer dataset for training multi-modal AI models (see the sketch after this list).
    1. Before: No YouTube API interaction. After: Added YouTube API interaction to collect metadata, descriptions, and comments. Benefit: More comprehensive dataset for training AI models.
  2. Speech-to-Text: Converting the extracted audio to transcripts using the ultra-fast whisperx.
    1. Before: No transcript generation. After: Added speech-to-text functionality to generate transcripts. Benefit: AI models can learn from spoken content, improving their understanding of natural language.
  3. Structured Data Storage: Storing the collected data in a structured format, such as JSON, allows for easier analysis and processing when training AI models.
    1. Before: No structured data storage. After: Added JSON data storage for structured data. Benefit: Easier analysis and processing of collected data.
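A minimal sketch of what steps 1 and 3 could look like together, assuming the google-api-python-client library and a YouTube Data API key; the function name and output fields are illustrative, not the Harvestor's actual code:

import json
from googleapiclient.discovery import build  # pip install google-api-python-client

def fetch_video_metadata(video_id: str, api_key: str) -> dict:
    # Pull the title, description, and top-level comments for one video via the YouTube Data API v3.
    youtube = build("youtube", "v3", developerKey=api_key)
    snippet = youtube.videos().list(part="snippet", id=video_id).execute()["items"][0]["snippet"]
    threads = youtube.commentThreads().list(part="snippet", videoId=video_id, maxResults=20).execute()
    return {
        "video_id": video_id,
        "title": snippet["title"],
        "description": snippet["description"],
        "comments": [
            t["snippet"]["topLevelComment"]["snippet"]["textDisplay"]
            for t in threads.get("items", [])
        ],
    }

# Structured storage (step 3): one JSON record per video, ready for downstream processing.
# record = fetch_video_metadata("VIDEO_ID", api_key="YOUR_API_KEY")
# with open("harvested.json", "w") as f:
#     json.dump(record, f, indent=2)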

Roadmap:

  • Add the ability to create spiders that crawl all subdomains and pages on a domain, like kye.medium.com or agora.readthedocs.org
  • Integrate self-organizing agents that transform text into structured datasets
  • Web app: simple text to ready-to-use dataset
 



Liquid 💧

Transform vanilla transformers into Liquid Transformers, transformers that adapt like water!
Here's a summarized list of the bugs, root cause analysis, fixes, and improvements made during the development of the Liquid Transformer API:
  1. Bug: RuntimeError when using the LiquidLayer class.
    1. Root Cause: The inputs tensor in the forward method of the LiquidLayer class was being converted to the wrong data type (torch.int64).
      Fix: Convert the hidden_states tensor to the same data type as inputs in the fused_step method.
      Before:
      inputs = self.word_embeddings(input_ids).to(torch.int64)
      
      After:
      hidden_states = hidden_states.to(inputs.dtype)
      
      Benefit: The fix resolves the RuntimeError and allows the LiquidLayer to process input data correctly.
  2. Bug: TypeError when decoding the output in the generate_text() function.
    1. Root Cause: The decode() function expected a list of integers, but it received a tensor.
      Fix: Use the generate() method from the Hugging Face Transformers library to generate text directly from the Liquid Transformer model.
      Before:
      decoded_output = tokenizer.decode(outputs[0][0].tolist(), skip_special_tokens=True)
      
      After:
      outputs = model.generate(input_ids)
      decoded_output = tokenizer.decode(outputs[0], skip_special_tokens=True)
      
      Benefit: The fix resolves the TypeError and allows the generate_text() function to generate text correctly using the Liquid Transformer model.
  3. Improvement: Inherit the LiquidTransformer class from the base model class instead of nn.Module.
    1. Before:
      class LiquidTransformer(nn.Module):
      
      After:
      class LiquidTransformer(AutoModel):
      
      Benefit: By inheriting from the base model class, the LiquidTransformer class gains access to all the methods and attributes of the base model, including the generate() method. This makes the Liquid Transformer API more compatible with the Hugging Face Transformers library and allows users to leverage the full functionality of the base model.


Liquid Attention

  1. LeakyIntegrator instantiation:
    1. Before:
      self.self_attn = LeakyIntegrator(d_model, nhead)
      
      After:
      self.self_attn = LeakyIntegrator(d_model, nhead, decay=0.1)
      
      Benefit: The LeakyIntegrator class was not being instantiated correctly. The change allows the LeakyIntegrator class to be properly instantiated with the correct parameters.
  2. Pass 'attn_mask' parameter in LeakyIntegrator:
    1. Before:
      def forward(self, query, key, value, key_padding_mask=None):
      
      After:
      def forward(self, query, key, value, key_padding_mask=None, attn_mask=None):
      
      Benefit: The 'attn_mask' parameter was missing from the forward method of the LeakyIntegrator class, leading to a TypeError. Adding the parameter allows the forward method to be called without errors.
  3. Pass 'need_weights' parameter in LeakyIntegrator:
    1. Before:
      def forward(self, query, key, value, key_padding_mask=None, attn_mask=None):
      
      After:
      def forward(self, query, key, value, key_padding_mask=None, attn_mask=None, need_weights=True):
      
      Benefit: The 'need_weights' parameter was missing from the forward method of the LeakyIntegrator class, leading to another TypeError. Adding the parameter allows the forward method to be called without errors and provides the option to return attention weights if needed.
       
       
These changes and improvements ensure that the Liquid Transformer API is easy to use, compatible with the Hugging Face Transformers library, and capable of generating text without errors.
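For context, here is a minimal sketch of what a LeakyIntegrator-style attention module with the forward signature above could look like; this is our illustrative assumption, not the actual Liquid implementation:

import torch.nn as nn

class LeakyIntegrator(nn.Module):
    # Multi-head attention whose output is leakily integrated across calls (illustrative sketch).
    def __init__(self, d_model: int, nhead: int, decay: float = 0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, nhead)
        self.decay = decay
        self.state = None  # running, leakily integrated attention output

    def forward(self, query, key, value, key_padding_mask=None, attn_mask=None, need_weights=True):
        out, weights = self.attn(
            query, key, value,
            key_padding_mask=key_padding_mask,
            attn_mask=attn_mask,
            need_weights=need_weights,
        )
        # Leaky integration: blend the new attention output with the previous (detached) state.
        prev = out.detach() if self.state is None or self.state.shape != out.shape else self.state.detach()
        self.state = (1 - self.decay) * prev + self.decay * out
        return self.state, weights

# usage sketch: self.self_attn = LeakyIntegrator(d_model, nhead, decay=0.1) inside a transformer encoder layer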
 

Starlight Vision

  • U-Net
  • MiDaS
  • CLIP
  • Spatial transformer
  • Starlight model


 

Ocean 🌊:

Ultra-fast multi-modality vector database.
 
 
  • Integrate ImageBind as a custom embedding function:
# Chroma-style collection creation; embedding_function is where a custom (e.g., ImageBind-backed) embedder plugs in
collection = client.create_collection("all-my-documents", embedding_function=text_embedding_function)
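A minimal sketch of wrapping ImageBind as a custom embedding function for a Chroma-style client; chromadb and the placeholder embed_texts_with_imagebind helper are our assumptions for illustration, not Ocean's actual code (depending on the chromadb version, the callable's parameter may need to be named input):

import chromadb

def embed_texts_with_imagebind(texts):
    # Hypothetical helper: this is where ImageBind's text encoder would run;
    # placeholder 1024-dim vectors keep the sketch runnable end to end.
    return [[0.0] * 1024 for _ in texts]

class ImageBindEmbeddingFunction:
    # Callable in the shape Chroma expects: documents in, one vector per document out.
    def __call__(self, input):
        return embed_texts_with_imagebind(input)

client = chromadb.Client()
collection = client.create_collection(
    "all-my-documents",
    embedding_function=ImageBindEmbeddingFunction(),
)
collection.add(documents=["hello ocean"], ids=["doc-1"])  # vectors come from the function above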
 
 


 

[Services] Athena


© APAC AI 2022 - 2024