joepareti54/README.md

program jp-llama-langchain-2.ipynb

This program demonstrates LangChain and SimpleSequentialChain for a streamlined approach, where the output of the question-generation step feeds directly into the answer-generation step.

LangChain is a powerful framework that simplifies the construction of applications leveraging LLMs by abstracting the complexities involved in setting up, executing, and chaining calls to LLMs. The provided example shows how LangChain can be used to:

  • Generate a Question: Based on a user-specified topic, the program generates a thoughtful question.
  • Generate an Answer: Following the question generation, the program then generates an answer to the posed question.

In the direct use of SimpleSequentialChain, the chain is treated as a black box that takes an input and provides an output with less visibility or control over the intermediate steps. It's efficient for straightforward sequential processes but less so when you need to interact with or alter the process between steps.
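A minimal sketch of this pattern, assuming a Replicate-hosted Llama model and LangChain's chain API (the model identifier and prompts are placeholders; the notebook's actual code may differ):

```python
from langchain.llms import Replicate
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain, SimpleSequentialChain

# Assumes REPLICATE_API_TOKEN is set in the environment; any LangChain-compatible LLM would work.
llm = Replicate(model="meta/llama-2-70b-chat")  # hypothetical identifier; some versions require "owner/name:version"

question_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write one thoughtful question about {topic}."),
)
answer_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Answer the following question concisely:\n{question}"),
)

# The output of the first chain (the question) is passed straight into the second chain.
overall_chain = SimpleSequentialChain(chains=[question_chain, answer_chain], verbose=True)
print(overall_chain.run("quantum computing"))
```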

program jp-llama-langchain-3.ipynb

This program also uses LangChain but manages each step explicitly, without relying on SimpleSequentialChain for the entire process. You can clearly differentiate between the question and the answer, which allows more tailored outputs and interactions.
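A minimal sketch of the explicit, step-by-step variant, under the same assumptions as above (placeholders throughout):

```python
from langchain.llms import Replicate
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = Replicate(model="meta/llama-2-70b-chat")  # hypothetical identifier, REPLICATE_API_TOKEN assumed set

question_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Write one thoughtful question about {topic}."),
)
answer_chain = LLMChain(
    llm=llm,
    prompt=PromptTemplate.from_template("Answer the following question concisely:\n{question}"),
)

topic = "quantum computing"
question = question_chain.run(topic)   # step 1: generate the question and keep it for inspection
print("QUESTION:", question)

answer = answer_chain.run(question)    # step 2: feed the question into the answer prompt explicitly
print("ANSWER:", answer)
```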

program llama-simple.ipynb

This program demonstrates programmatic access to a Llama model and highlights the correct way to set the authentication token and pass it to the model.
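The notebook's exact setup is not reproduced here; assuming a Replicate-hosted Llama model (as in the other notebooks), a minimal sketch of setting the token and calling the model might look like this:

```python
import os
import replicate

# Hypothetical token handling: read the token from the environment (or a secrets store)
# rather than hard-coding it in the notebook.
os.environ["REPLICATE_API_TOKEN"] = os.environ.get("REPLICATE_API_TOKEN", "<your-token-here>")

# The token is picked up by the replicate client when the model is invoked.
output = replicate.run(
    "meta/llama-2-7b-chat",  # hypothetical model identifier
    input={"prompt": "Explain what a Llama model is in one sentence."},
)
# Streaming models return a generator of text chunks.
print("".join(str(chunk) for chunk in output))
```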

program jp_test_llama_0.ipynb

This program demonstrates a way to leverage the Replicate API for custom text generation tasks using a preferred LLM. By encapsulating the details of API interaction and response handling within a Python class, it simplifies the process of integrating advanced language model capabilities into a variety of applications, ranging from content creation to AI-driven text analysis:

  • Initialization: The LLM class is initialized with a model_name and a replicate_api_token.
  • Text Generation: The generate method takes a text prompt as input and uses the Replicate platform to generate output based on this prompt.
  • Output Collection: The method expects the model's response to be a generator and iterates over it, concatenating the parts of the generated text into a single output string.
  • Error Handling
  • Example Usage
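A minimal sketch of such a wrapper class (the model identifier and token are placeholders, and the notebook's implementation may differ in detail):

```python
import replicate

class LLM:
    """Hypothetical sketch of the wrapper class described above."""

    def __init__(self, model_name, replicate_api_token):
        self.model_name = model_name
        self.client = replicate.Client(api_token=replicate_api_token)

    def generate(self, prompt):
        try:
            # replicate returns a generator for streaming models;
            # concatenate the streamed chunks into a single string.
            stream = self.client.run(self.model_name, input={"prompt": prompt})
            return "".join(str(chunk) for chunk in stream)
        except Exception as exc:
            # Basic error handling: report the failure instead of raising.
            return f"Generation failed: {exc}"

# Example usage (model name and token are placeholders):
llm = LLM("meta/llama-2-7b-chat", replicate_api_token="<your-token-here>")
print(llm.generate("Write a haiku about retrieval augmented generation."))
```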

program jp_llama_langchain_rag_faiss_1.ipynb

For the knowledge base implementation, refer to this Q&A: https://chat.openai.com/c/ec610024-ce65-456b-bb3b-7f24e78d9161

In this program, embeddings are generated for the knowledge base entries and FAISS is used for similarity search.

Customization

  • Knowledge Base: You can expand the knowledge_base dictionary with new entries as needed. Make sure to re-generate and re-index embeddings after updates.
  • Embedding Model: The default model is all-MiniLM-L6-v2. You can change the model used for embedding generation to better suit your specific requirements or to experiment with performance differences.

Additional Notes

This program is designed for demonstration purposes and showcases how to integrate semantic search capabilities into Python applications. For production use, consider scalability aspects and ensure proper error handling and logging mechanisms are in place.

Implementation Overview

The core of this program's functionality revolves around two main enhancements: generating semantic embeddings for the knowledge base entries and implementing an efficient similarity search. Here's a brief overview of the key steps involved in the implementation:

Generating Embeddings:

The sentence-transformers library is utilized to convert textual content from the knowledge base into high-dimensional vector representations (embeddings). This process involves iterating over each entry in the knowledge_base dictionary and using the model (e.g., all-MiniLM-L6-v2) to generate embeddings that capture the semantic meaning of the text. These embeddings are stored in a NumPy array for efficient handling.
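A minimal sketch of this embedding step (the knowledge base entries shown here are hypothetical placeholders; the notebook's actual content may differ):

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Hypothetical knowledge base: key -> answer text.
knowledge_base = {
    "rag": "Retrieval augmented generation combines a retriever with an LLM.",
    "faiss": "FAISS performs fast similarity search over dense vectors.",
    "embeddings": "Embeddings map text to high-dimensional vectors that capture meaning.",
}

model = SentenceTransformer("all-MiniLM-L6-v2")
keys = list(knowledge_base.keys())
texts = [knowledge_base[k] for k in keys]

# Encode every entry into a dense vector; FAISS expects float32.
embeddings = np.asarray(model.encode(texts), dtype="float32")
print(embeddings.shape)  # (num_entries, 384) for all-MiniLM-L6-v2
```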

Indexing with FAISS:

Once embeddings are generated, they are indexed using FAISS (Facebook AI Similarity Search), a library specialized for fast similarity searching. This involves creating a FAISS index (e.g., IndexFlatL2) and adding the generated embeddings to this index. The index is then used to perform similarity searches: given a query, the system encodes it into an embedding using the same transformer model, and queries the FAISS index to find the most semantically similar knowledge base entry.

Retrieving Relevant Entries:

The retrieval process is triggered by user queries. Each query is transformed into an embedding and passed to the FAISS index, which returns the index(es) of the embedding(s) most similar to the query. These indices are used to fetch the corresponding knowledge base entries, which are then presented as the query's answer.
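To make the indexing and retrieval steps concrete, here is a minimal, self-contained sketch using the same hypothetical knowledge base as above (the notebook's actual data and structure may differ):

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

knowledge_base = {
    "rag": "Retrieval augmented generation combines a retriever with an LLM.",
    "faiss": "FAISS performs fast similarity search over dense vectors.",
    "embeddings": "Embeddings map text to high-dimensional vectors that capture meaning.",
}
model = SentenceTransformer("all-MiniLM-L6-v2")
keys = list(knowledge_base.keys())
embeddings = np.asarray(model.encode([knowledge_base[k] for k in keys]), dtype="float32")

# Index the embeddings with an exact L2 index.
index = faiss.IndexFlatL2(embeddings.shape[1])
index.add(embeddings)

def answer(query, k=1):
    """Encode the query with the same model and return the closest knowledge-base entry."""
    query_vec = np.asarray(model.encode([query]), dtype="float32")
    _, indices = index.search(query_vec, k)
    return knowledge_base[keys[indices[0][0]]]

print(answer("How do I search vectors quickly?"))
```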

These enhancements significantly improve the system's ability to understand and match queries with relevant information based on semantic similarity, moving beyond simple keyword matching to a more nuanced understanding of content.

program jp_llama_langchain_rag_0.ipynb

This program is explained here: https://www.linkedin.com/pulse/enhancing-ai-retrieval-augmented-generation-demo-system-joseph-pareti-t4fcf/?trackingId=MSd3TWXgR8iYZYh%2BUK8Ksg%3D%3D and is a preparation step for jp_llama_langchain_rag_faiss_1.ipynb: it uses a simplified knowledge base implemented as a Python dictionary.
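As an illustration, a knowledge base of this kind can be as simple as the following (the entries and the lookup logic are hypothetical placeholders, not the notebook's exact code):

```python
# Hypothetical knowledge base: topic -> answer text.
knowledge_base = {
    "retrieval augmented generation": "RAG combines a retriever over a knowledge base with an LLM that generates answers.",
    "faiss": "FAISS is a library for efficient similarity search over dense vectors.",
}

def retrieve(query):
    """Naive keyword lookup: return the first entry whose key appears in the query."""
    q = query.lower()
    for key, text in knowledge_base.items():
        if key in q:
            return text
    return "No matching entry found."

print(retrieve("What is FAISS used for?"))
```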

program gnn_1.ipynb

In GNNs, each node in the graph is typically associated with a feature vector. This vector can contain various types of information relevant to the nodes, such as physical properties, categorical data, or any other attributes relevant to the nodes' roles in the graph.

The dimension of the node features in this particular model is 1, meaning each node is represented by a single scalar value. This could represent a simple property or characteristic. For instance, in a molecular graph, it could indicate the type of atom (e.g., hydrogen or carbon coded as 0 or 1), or in a social network graph, it could indicate binary attributes such as gender.

First GCNConv Layer:

This layer takes the input node features and applies a graph convolution operation. The operation aggregates features from the neighbors of each node, effectively allowing each node to gather information from its immediate graph locality. The layer expands the feature dimension from 1 to 16, transforming the simple scalar feature into a richer feature vector of size 16 that captures each node's context within the graph.

Activation and Dropout:

After the first graph convolution, an activation function (ReLU in this case) is applied to introduce non-linearity, helping the model to learn complex patterns. Dropout is also applied as a form of regularization to prevent overfitting, especially when the dataset or the graph structure might not provide diverse enough samples for robust learning across all nodes.

Second GCNConv Layer:

Another graph convolution layer further transforms the node features, typically aiming to produce outputs suitable for specific tasks such as node classification, where the final feature size corresponds to the number of classes.

Features in this GNN model are attributes or properties associated with each node in the graph. In this model, each node has a single feature with a dimension of 1. These features are the input data that the model uses to learn about each node.

Purpose: The features are used to initialize the state of each node in the graph. They provide the raw input that the GNN uses to compute more complex representations through its layers, leveraging both the intrinsic data (the node features themselves) and the structural data (how nodes are connected).

If the graph represents a simple chemical structure, a feature could indicate a binary property, such as whether an atom is carbon (1) or not (0). These features are the basis for all further calculations and learning within the network.
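A minimal sketch of such a two-layer GCN, assuming PyTorch Geometric (the exact architecture in gnn_1.ipynb may differ in details such as the dropout rate and number of classes):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

class GCN(torch.nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.conv1 = GCNConv(1, 16)            # expand the scalar node feature to a 16-dim representation
        self.conv2 = GCNConv(16, num_classes)  # map to per-class scores for node classification

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))              # aggregate neighbor features, then non-linearity
        x = F.dropout(x, p=0.5, training=self.training)    # regularization against overfitting
        x = self.conv2(x, edge_index)
        return F.log_softmax(x, dim=1)

# Toy graph: 3 nodes with a single scalar feature each, connected in a line (0-1-2).
x = torch.tensor([[0.0], [1.0], [0.0]])                  # node features, shape [num_nodes, 1]
edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])  # undirected edges as two directed pairs
model = GCN(num_classes=2)
out = model(x, edge_index)
print(out.shape)  # [3, 2]: one class-score vector per node
```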

test2.py

This is a toy program used to explain diffusion in the context of AlphaFold.

Original Data Initialization:

The script starts with an original data point set to 5.0. This is a simplified one-dimensional example, representing a single value from which we will generate noisy versions and then attempt to recover the original through a denoising process.

Noise Addition Function (add_noise):

A function that takes a data point and a specified noise level, then returns the data with Gaussian noise added. This simulates the process of corrupting clean data with random variations, mimicking the initial stages of a diffusion model where data is progressively noised.

Noise Predictor Class (NoisePredictor):

A simple placeholder class designed to model the ability to predict and estimate the level of noise in data. It includes: an initialization method that sets a starting guess for a noise factor; a train method that adjusts the noise factor based on the difference between noisy data and original data, simulating a learning process where the model adjusts its parameters based on observed data; and a predict_noise method that calculates an estimated noise level for given data based on the learned noise factor.

Training Phase (train_diffusion):

This function simulates the training of a noise prediction model over several steps. It starts by adding a large amount of noise to the original data and initializes the NoisePredictor. For each training step, it generates new noisy data with decreasing noise levels, trains the NoisePredictor to better estimate the noise based on the difference from the original data, and adjusts the model accordingly. This step simulates the typical training of a diffusion model where the model learns to predict earlier, less noisy states from more noisy ones.

Inference Phase (inference_diffusion):

Using the trained NoisePredictor, this function attempts to recover the original data from a noisy state by iteratively predicting and subtracting the estimated noise. This simulates the denoising phase of a diffusion model, where the model applies learned patterns to iteratively refine a noisy input into a clear output.

Execution Flow:

The script begins by running the training phase, where it simulates the addition of noise to the original data and trains the NoisePredictor to estimate this noise. It then proceeds to the inference phase, starting from the noisy data produced at the end of the training phase, and uses the trained NoisePredictor to denoise the data, ideally recovering something close to the original data point.
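A condensed sketch of what such a toy script might look like (the constants, schedules, and update rules are illustrative choices; the actual test2.py may differ):

```python
import random

ORIGINAL_DATA = 5.0  # the single clean value the script tries to recover

def add_noise(data, noise_level):
    """Corrupt the data with Gaussian noise of the given magnitude."""
    return data + random.gauss(0.0, noise_level)

class NoisePredictor:
    """Placeholder 'model' that learns a single scalar noise factor."""
    def __init__(self):
        self.noise_factor = 1.0  # initial guess

    def train(self, noisy_data, original_data, lr=0.1):
        # Nudge the noise factor toward the noise actually observed in this sample.
        observed_noise = noisy_data - original_data
        self.noise_factor += lr * (observed_noise - self.noise_factor)

    def predict_noise(self, data):
        return self.noise_factor

def train_diffusion(steps=10):
    predictor = NoisePredictor()
    noisy = add_noise(ORIGINAL_DATA, noise_level=3.0)  # start from heavily noised data
    for step in range(steps):
        noise_level = 3.0 * (1 - step / steps)         # decreasing noise schedule
        noisy = add_noise(ORIGINAL_DATA, noise_level)
        predictor.train(noisy, ORIGINAL_DATA)
    return predictor, noisy

def inference_diffusion(predictor, noisy_data, steps=10):
    data = noisy_data
    for _ in range(steps):
        estimated_noise = predictor.predict_noise(data)
        data = data - estimated_noise / steps          # remove a fraction of the estimated noise each step
    return data

predictor, noisy = train_diffusion()
recovered = inference_diffusion(predictor, noisy)
print("original:", ORIGINAL_DATA, "noisy:", noisy, "recovered:", recovered)
```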

Popular repositories

  1. alphafold3-diffusion-demo (Python)
  2. ML0: machine-learning repo
  3. image-classification: train a NN using PyTorch for image classification (Jupyter Notebook)
  4. Non_supervised_learning: investigate clustering algorithms including k-means, etc. (Jupyter Notebook)
  5. PCA: experimenting with PCA and random forest classifier on the MNIST dataset (Jupyter Notebook)
  6. CustomerSegments: identify customer segments to increase sales; based on a demographic file containing ~900K records and a ~200K-record customer file (Jupyter Notebook)