Categories
AI

Groq LPU

Groq Architecture

Groq’s architecture, known as the Tensor Streaming Processor (TSP), is a unique approach to processing artificial intelligence (AI) workloads. It diverges significantly from conventional multi-core processor designs and is characterised by its deterministic nature and software-defined hardware strategy.

Key Features of Groq’s TSP Architecture

  • Deterministic Execution: The TSP architecture is designed to be deterministic, meaning that the execution of instructions is planned and scheduled statically by the compiler. This eliminates the need for speculative execution and branch prediction, which are common in traditional processor designs[1].
  • Software-Defined Hardware: Groq’s approach emphasizes the role of software in defining hardware behavior. The compiler plays a central role in orchestrating the execution of instructions, which allows for a more predictable and efficient use of the processor’s resources[1][2].
  • Simplified Control Logic: By transferring control to the software compiler, Groq’s TSP architecture removes unnecessary control logic from the hardware. This simplifies the hardware design and optimizes chip area allocation, leading to higher computing power per unit area[2].
  • Memory and Execution Unit Interactions: The TSP architecture features a spatial arrangement of functional units that pass operands and results to each other in a method called “chaining.” This allows for the output of one function unit to be directly connected to the input of an adjacent downstream function unit, enhancing the efficiency of operations[2].
  • High-Performance Memory: Instead of using DRAM or HBM, Groq’s TSP uses SRAM for high-performance storage, which maintains high memory bandwidth and facilitates easier cascaded expansion for larger cluster configurations[2].
  • Compute Density: Groq’s TSP boasts a computing density of more than 1 TeraOps per square millimeter, thanks to its data parallel processing capabilities and the large number of multipliers integrated into the chip[2].

Building Your Own TSP Architecture

Building your own version of Groq’s TSP architecture would be an incredibly complex task, as it involves designing a custom processor with a unique architecture, developing a specialized compiler, and creating the necessary software infrastructure to manage the hardware.

However, if you are interested in experimenting with TSP-like concepts or developing applications that could run on Groq’s hardware, you could focus on the following aspects:

  1. Understanding the Principles: Gain a deep understanding of the principles behind Groq’s TSP architecture, such as deterministic execution, software-defined hardware, and the chaining method of operation[1][2].
  2. Compiler Development: Develop skills in compiler design and learn how to create compilers that can statically schedule instructions and manage hardware resources efficiently[1].
  3. Hardware Design: Study hardware design and learn about the spatial arrangement of functional units, memory integration, and the elimination of unnecessary control logic[2].
  4. Software Simulation: Before attempting to build physical hardware, you could create a software simulation of the TSP architecture to test and validate your design concepts (a minimal toy sketch follows this list).
  5. Collaboration: Consider collaborating with experts in the field of processor design and AI hardware, as building a processor akin to Groq’s TSP would require a multidisciplinary team with expertise in various areas of computer engineering and software development.
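
To make point 4 concrete, here is a minimal, hypothetical Python sketch of a statically scheduled pipeline with chained functional units. It is a toy illustration of determinism and chaining only, not a model of Groq's actual hardware; the unit names, the SRAM dictionary, and the schedule format are all invented for this example.

# Toy model of a statically scheduled, "chained" pipeline.
# This is NOT Groq's design; unit names and the schedule format are invented
# purely to illustrate determinism and chaining at a conceptual level.
import numpy as np

# On-chip "SRAM": plain arrays, no cache hierarchy to model.
sram = {
    "weights": np.arange(12, dtype=np.float32).reshape(3, 4),
    "activations": np.ones(4, dtype=np.float32),
}

# Functional units pass results directly to the next unit (chaining),
# rather than writing intermediates back to memory.
def matmul_unit(operands):
    return sram["weights"] @ operands

def relu_unit(operands):
    return np.maximum(operands, 0.0)

def store_unit(operands):
    sram["result"] = operands
    return operands

# The "compiler" emits a fixed schedule of (cycle, unit) pairs.
# Execution order is fully known ahead of time: no branch prediction,
# no speculation, every run takes the same path.
static_schedule = [
    (0, matmul_unit),
    (1, relu_unit),
    (2, store_unit),
]

def run(schedule, initial_input):
    value = initial_input
    for cycle, unit in schedule:
        value = unit(value)  # chained: output feeds the next unit directly
        print(f"cycle {cycle}: {unit.__name__} -> {value}")
    return value

run(static_schedule, sram["activations"])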

It’s important to note that creating a processor like Groq’s TSP from scratch would be a monumental task that goes beyond the capabilities of most individuals or small teams. It would require significant resources, expertise, and time. Groq itself was founded by engineers with experience in Google’s TPU project and has raised substantial funding to develop its technology[1].

Regarding the term “LPU”: Groq markets its chip built on the TSP architecture as the Language Processing Unit (LPU), so in this context LPU and TSP refer to the same underlying design[1][2]. Note that several of the search results for “LPU” instead refer to Lovely Professional University, which is unrelated to Groq[3][4].

Citations:
[1] https://fuse.wikichip.org/news/3005/groq-tensor-streaming-processor-delivers-1-petaops-of-compute/
[2] https://groqai.com/BLOG/Groq-%20The-King-of-Performance-for-AI-Inference.html
[3] https://www.lpu.in/events/freshmeninduction/lpu-foundation.php
[4] https://www.lpu.in/academics/live-projects.php
[5] https://www.youtube.com/watch?v=pb0PYhLk9r8
[6] https://www.youtube.com/watch?v=WCYf2Agml-s
[7] https://stackoverflow.com/questions/75090024/regarding-oas-lpu-test-answer-find
[8] https://www.youtube.com/watch?v=dxh0LjhRHO0
[9] https://www.linkedin.com/pulse/grokq-ai-speed-thought-madhuri-mittal-5er6e
[10] https://www.geeksforgeeks.org/lovely-professional-university-lpu-campus-experience/
[11] https://www.linkedin.com/pulse/groqs-lightning-fast-20000-lpu-chip-revolutionizes-llm-bruno-bernard-7hbhf

The Groq Language Processing Unit (LPU) is designed to support various programming languages for its development and implementation. Based on the provided sources, the programming languages used to build the Groq LPU include:

  1. C++: Magnolia, a programming language designed as an embodiment of Stepanov-style generic programming, can target C++ as its host (implementation) language[1].
  2. Python: Magnolia can alternatively target Python as its host language[1].
  3. Host Language (C++): In the hardware-specialization work on the Groq architecture, the host language of the Magnolia compiler is C++, and the code contributing to the production of an optimized output function is split between Magnolia and C++[2].
  4. Groq Compiler Language: The Groq compiler language is used to extend the Magnolia compiler with code generation and term-rewriting facilities based on axioms, enabling hardware specialization and optimization[1].

These programming languages play a crucial role in the development, implementation, and optimization of the Groq LPU architecture, allowing for efficient and high-performance processing in AI applications.

Citations:
[1] https://www.frontiersin.org/articles/10.3389/fcomp.2022.931312/full
[2] https://dl.acm.org/doi/pdf/10.1145/3470496.3527405
[3] https://wow.groq.com/why-groq/
[4] https://dl.acm.org/doi/abs/10.1145/3470496.3527405
[5] https://www.sciencedirect.com/science/article/pii/S1383762122001138

Magnolia is a programming language designed as an embodiment of Stepanov-style generic programming[3]. It is structured into modules that mix abstract specifications of operations and their concrete implementations, following the work of Goguen and Burstall on the theory of institutions[3]. Magnolia can target both C++ and Python as the host language for its implementation[3].

In the context of Groq’s LPU, Magnolia is used to implement hardware specialization for the Groq architecture via the Magnolia compiler[2]. The code contributing to the production of an optimized output function is split between Magnolia and C++[2], which supports efficient, high-performance processing in AI applications.

Citations:
[1] https://docs.magnolia-cms.com/product-docs/6.2/administration/language/
[2] https://docs.magnolia-cms.com/product-docs/6.2/administration/language/language-configuration/
[3] https://www.frontiersin.org/articles/10.3389/fcomp.2022.931312/full
[4] https://bejamas.io/discovery/headless-cms/magnoliacms/
[5] https://github.com/magnolia-cms
[6] https://www.magnoliasci.com/features/
[7] https://wow.groq.com/lpu-inference-engine/
[8] https://codeconfessions.substack.com/p/groq-lpu-design
[9] https://www.reddit.com/r/LocalLLaMA/comments/1bbfzvk/groq_lpu_and_its_future/
[10] https://wow.groq.com/why-groq/
[11] https://www.reddit.com/r/LocalLLaMA/comments/1b92nzs/groq_lpu_and_its_implications_on_the_future_of_ml/
[12] https://news.ycombinator.com/item?id=39448398
[13] https://www.youtube.com/watch?v=RSzG_v5XIxM
[14] https://www.youtube.com/watch?v=mk1M2Ctutxo
[15] https://sourceforge.net/projects/magnolia/
[16] https://www.linkedin.com/pulse/groq-pioneering-future-ai-language-processing-unit-lpu-gene-bernardin-oqose
[17] https://bora.uib.no/bora-xmlui/bitstream/handle/11250/3037210/fcomp-04-931312.pdf?isAllowed=y&sequence=1

Categories
API

Azure Studio Model API

import urllib.request
import urllib.error
import json
import os
import ssl
from rich import print

# 1. Allow self-signed certificate
def allowSelfSignedHttps(allowed):
    if allowed and not os.environ.get('PYTHONHTTPSVERIFY', '') and getattr(ssl, '_create_unverified_context', None):
        ssl._create_default_https_context = ssl._create_unverified_context

allowSelfSignedHttps(True)

# 2. Ask Question
data = {
  "input_data": {
    "input_string": [
      {
        "role": "user",
        "content": "I am going to London, what should I see?"
      }
    ],
    "parameters": {
      "temperature": 0.7,
      "top_p": 0.9,
      "do_sample": True,
      "max_new_tokens": 200
    }
  }
}

# 3. Call the Model
body = str.encode(json.dumps(data))
url = 'https://contact-6352-wggzm.uksouth.inference.ml.azure.com/score'
api_key = os.getenv('AZURE_API_KEY')
if not api_key:
    raise Exception("A key should be provided to invoke the endpoint")

headers = {'Content-Type':'application/json', 'Authorization':('Bearer '+ api_key), 'azureml-model-deployment': 'phi-3-mini-4k-instruct-2' }
req = urllib.request.Request(url, body, headers)

try:
    response = urllib.request.urlopen(req)
    result = response.read()
    result_json = json.loads(result.decode('utf-8'))
    print(result_json['output'])
except urllib.error.HTTPError as error:
    print("The request failed with status code: " + str(error.code))
    print(error.info())
    print(error.read().decode("utf8", 'ignore'))
Categories
LLM

Phi 3 Small Language Model (SLM)

Introduction and Key Features:

  • phi-3-mini: 3.8B model rivaling GPT-3.5
  • Can run phi-3-mini locally on a phone (a loading sketch follows these lists)
  • Trained on filtered web + synthetic data

Model Performance:

  • phi-3-mini rivals much larger models on academic reasoning benchmarks
  • Evaluated on Microsoft internal safety benchmarks

Safety Considerations:

  • phi-3-mini safety aligned through post-training
  • Limited factual knowledge storage in phi-3-mini
  • Challenges remain on factual accuracy, biases

Larger Model Versions:

  • Larger phi-3-small (7B), phi-3-medium (14B) models

phi-3-mini:

  • Transformer decoder architecture similar to Llama-2
  • 3072 hidden dimension, 32 heads, 32 layers
  • 4K default context length
  • Long context 128K version (phi-3-mini-128K) using LongRope

phi-3-small (7B):

  • 32 layers, 4096 hidden size
  • Uses grouped-query attention (4 queries share 1 key)
  • Mix of dense and novel blocksparse attention layers
  • 8K default context length
  • Different tokenizer (tiktoken) with 100,352 vocab size

phi-3-medium (14B):

  • 40 layers, 5120 embedding dimension, 40 heads
  • Same tokenizer and context length as phi-3-mini
  • Trained for more epochs (4.8T tokens) on same data as phi-3-mini
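
As a quick way to try phi-3-mini locally (referenced above), the sketch below loads the instruct checkpoint with Hugging Face transformers. The model id microsoft/Phi-3-mini-4k-instruct and the generation settings are assumptions based on the public release, not part of these notes; adjust them to your environment.

# Minimal sketch: run phi-3-mini locally with Hugging Face transformers.
# Assumes the public checkpoint "microsoft/Phi-3-mini-4k-instruct" and a
# machine with enough RAM/VRAM; parameters are illustrative, not tuned.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # ~8GB of weights at bf16 for a 3.8B model
    device_map="auto",
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Explain grouped-query attention in two sentences."}]
inputs = tokenizer.apply_chat_template(
    messages, return_tensors="pt", add_generation_prompt=True
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.7, top_p=0.9)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))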
Categories
AutoGen

AutoGen Llama 3 Text to SQL Evaluation

pip install pyautogen spider-env

Using Groq

export GROQ_API_KEY=xxxxxxxxxxxx
# 1. Configuration
import json
import os
from typing import Annotated, Dict
from spider_env import SpiderEnv
from autogen import ConversableAgent, UserProxyAgent

os.environ["AUTOGEN_USE_DOCKER"] = "False"
llm_config = {
    "cache_seed": 48,
    "config_list": [{
        "model": os.environ.get("OPENAI_MODEL_NAME", "llama3-70b-8192"), 
        "api_key": os.environ["GROQ_API_KEY"], 
        "base_url": os.environ.get("OPENAI_API_BASE", "https://api.groq.com/openai/v1")}
    ],
}

# 2. Import Data
gym = SpiderEnv()
observation, info = gym.reset()
question = observation["instruction"]
print(question)
schema = info["schema"]
print(schema)

# 3. Create Agents
def check_termination(msg: Dict):
    # Terminate once the tool reports no error and the reward marks the query as correct
    if "tool_responses" not in msg:
        return False
    json_str = msg["tool_responses"][0]["content"]
    obj = json.loads(json_str)
    return "error" not in obj or (obj["error"] is None and obj["reward"] == 1)

sql_writer = ConversableAgent(
    "sql_writer",
    llm_config=llm_config,
    system_message="You are good at writing SQL queries. Always respond with a function call to execute_sql().",
    is_termination_msg=check_termination,
)

user_proxy = UserProxyAgent(
    "user_proxy", 
    human_input_mode="NEVER", 
    max_consecutive_auto_reply=5
)

# 4. Create Tools / Function Calling
@sql_writer.register_for_llm(description="Function for executing SQL query and returning a response")
@user_proxy.register_for_execution()
def execute_sql(reflection: Annotated[str, "Think about what to do"], sql: Annotated[str, "SQL query"]) -> Annotated[Dict[str, str], "Dictionary with keys 'result' and 'error'"]:
    observation, reward, _, _, info = gym.step(sql)
    error = observation["feedback"]["error"]
    if not error and reward == 0:
        error = "The SQL query returned an incorrect result"
    if error:
        return { "error": error, "wrong_result": observation["feedback"]["result"], "correct_result": info["gold_result"], }
    else:
        return { "result": observation["feedback"]["result"], }


# 5. Initiate Chat
prompt_template = f"""Below is the schema for a SQL database:
{schema}
Generate a SQL query to answer the following question:
{question}
"""

user_proxy.initiate_chat(sql_writer, message=prompt_template)

Using Ollama

export OPENAI_API_BASE=http://localhost:11434/v1
export OPENAI_MODEL_NAME=llama3
Categories
AutoGen

AutoGen AI Research Agents

import autogen
import os

llm_config = {
    "cache_seed": 47,
    "temperature": 0,
    "config_list": [{"model": os.environ.get("OPENAI_MODEL_NAME", "gpt-4-turbo"), "api_key": os.environ["OPENAI_API_KEY"], "base_url": os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1")}],
    "timeout": 120,
}

user_proxy = autogen.UserProxyAgent(
    name="Admin",
    system_message="A human admin. Interact with the planner to discuss the plan. Plan execution needs to be approved by this admin.",
    code_execution_config=False,
)

engineer = autogen.AssistantAgent(
    name="Engineer",
    llm_config=llm_config,
    system_message="""Engineer. You follow an approved plan. You write python/shell code to solve tasks. Wrap the code in a code block that specifies the script type. The user can't modify your code. So do not suggest incomplete code which requires others to modify. Don't use a code block if it's not intended to be executed by the executor.
Don't include multiple code blocks in one response. Do not ask others to copy and paste the result. Check the execution result returned by the executor.
If the result indicates there is an error, fix the error and output the code again. Suggest the full code instead of partial code or code changes. If the error can't be fixed or if the task is not solved even after the code is executed successfully, analyze the problem, revisit your assumption, collect additional info you need, and think of a different approach to try.
""",
)

scientist = autogen.AssistantAgent(
    name="Scientist",
    llm_config=llm_config,
    system_message="""Scientist. You follow an approved plan. You are able to categorize papers after seeing their abstracts printed. You don't write code.""",
)

planner = autogen.AssistantAgent(
    name="Planner",
    system_message="""Planner. Suggest a plan. Revise the plan based on feedback from admin and critic, until admin approval.
The plan may involve an engineer who can write code and an executor who runs it.
Explain the plan first. Be clear which step is performed by the engineer and which step is performed by the executor.
""",
    llm_config=llm_config,
)

executor = autogen.UserProxyAgent(
    name="Executor",
    system_message="Executor. Execute the code written by the engineer. If it fails try again with the fix. Finally report the result.",
    human_input_mode="NEVER",
    code_execution_config={
        "last_n_messages": 3,
        "work_dir": "paper",
        "use_docker": False,
    },  
)

critic = autogen.AssistantAgent(
    name="Critic",
    system_message="Critic. Double check plan, claims, code from other agents and provide feedback. Check whether the plan includes adding verifiable info such as source URL.",
    llm_config=llm_config,
)

groupchat = autogen.GroupChat(
    agents=[user_proxy, engineer, planner, scientist, executor, critic], messages=[], max_round=50
)
manager = autogen.GroupChatManager(groupchat=groupchat, llm_config=llm_config)

response = user_proxy.initiate_chat(
    manager,
    message="""
find papers on LLM applications from arxiv in the last week, create a markdown table of different domains. Generate code and make the executor run it to find the information.
""",
)
print("Response:")
print(response)
summary = response.summary if hasattr(response, 'summary') else "No summary available."
print("Summary:")
print(summary)
Categories
LLM

Best NVIDIA GPUs for LLMs

Here is a strong case for investing in NVIDIA GPUs, along with a list of recommended devices:

  1. For training larger LLMs like LLaMA 65B or Bloom, a multi-GPU setup with each GPU having at least 40GB of VRAM is recommended, such as NVIDIA’s A100 or the new H100 GPUs.
  2. For inference tasks with larger models, GPUs with substantial VRAM, like the NVIDIA RTX 6000 Ada (48GB VRAM) or RTX 4090 (24GB VRAM), are ideal choices, ensuring smooth and efficient performance.
  3. Smaller and more efficient LLMs, such as Alpaca, BERT, and some variants of Falcon and Zephyr, can be run on GPUs with 8GB to 16GB of VRAM, making them accessible on devices like the NVIDIA RTX 3080 or RTX 4080.

Here is a list of recommended NVIDIA GPUs for testing and running LLMs locally, along with the LLMs each is suited for (a rough VRAM estimator follows the list):

For Training Large LLMs:
  • NVIDIA A100 (40GB/80GB): LLaMA 65B, Bloom
  • NVIDIA H100 (80GB): LLaMA 65B, Bloom

For Inference with Large LLMs:
  • NVIDIA RTX 6000 Ada (48GB): Bloom
  • NVIDIA RTX 4090 (24GB): LLaMA 7B, Falcon, Zephyr

For Training and Inference with Smaller/Medium LLMs:
  • NVIDIA RTX 4080 (16GB): Mistral 7B, MPT 7B, Alpaca, BERT
  • NVIDIA RTX 3090 (24GB): Falcon, Phi 2, Zephyr
  • NVIDIA RTX 3080 (10GB): Alpaca, BERT
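
As a rough rule of thumb behind these recommendations, inference VRAM scales with parameter count times bytes per parameter, plus overhead for the KV cache and activations. The helper below is an illustrative sketch, not a precise calculator; the 20% overhead factor is an assumption and real usage varies with context length and runtime.

# Back-of-the-envelope VRAM estimate for LLM inference.
# weights_gb = params * bytes_per_param; the 1.2 overhead factor for
# KV cache / activations is a rough assumption, not a measured value.
def estimate_vram_gb(params_billion: float, bytes_per_param: float = 2.0, overhead: float = 1.2) -> float:
    weights_gb = params_billion * bytes_per_param  # 1B params at 1 byte ~= 1 GB
    return round(weights_gb * overhead, 1)

for name, params, bytes_pp in [
    ("LLaMA 7B  @ fp16", 7, 2.0),
    ("LLaMA 7B  @ int4", 7, 0.5),
    ("LLaMA 65B @ fp16", 65, 2.0),
]:
    print(f"{name}: ~{estimate_vram_gb(params, bytes_pp)} GB")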
Categories
Script

Removing Comments in VSCode Instantly

For a single file in VSCode:

  1. Open the file.
  2. Press Ctrl+F to open the Find widget.
  3. Click the .* icon to enable regex search.
  4. In the Find field, enter ^#.*\n? (this matches whole lines that start with #).
  5. Press Alt+Enter to select all occurrences of the match.
  6. Press Delete to remove all selected comment lines.

For all files in a folder:

  1. Open the command palette with Ctrl+Shift+P.
  2. Type “Replace” and select “Replace in Files”.
  3. In the “Find” field, enter ^#.*\n? and leave the “Replace” field empty.
  4. Click “Replace All” to remove all comment lines. A scripted alternative is sketched below.
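
If you prefer doing this outside VSCode, the small Python sketch below applies the same ^# pattern across a folder. The folder path and the .py filter are placeholders; it rewrites files in place, so run it on a copy or a clean git tree first.

# Remove full-line "#" comments from every .py file under a folder,
# mirroring the ^#.*\n? regex used in the VSCode steps above.
# Path and file filter are placeholders -- adjust before use.
import pathlib
import re

pattern = re.compile(r"^#.*\n?", flags=re.MULTILINE)

for path in pathlib.Path("./my_project").rglob("*.py"):
    text = path.read_text(encoding="utf-8")
    cleaned = pattern.sub("", text)
    if cleaned != text:
        path.write_text(cleaned, encoding="utf-8")
        print(f"stripped comments: {path}")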
Categories
AutoGen

AutoGen Text to SQL Evaluation

pip install pyautogen spider-env
import json
import os
from typing import Annotated, Dict

from spider_env import SpiderEnv

from autogen import ConversableAgent, UserProxyAgent, config_list_from_json

gym = SpiderEnv()

observation, info = gym.reset()

question = observation["instruction"]
print(question)

schema = info["schema"]
print(schema)


os.environ["AUTOGEN_USE_DOCKER"] = "False"
config_list = config_list_from_json(env_or_file="OAI_CONFIG_LIST")


def check_termination(msg: Dict):
    # Terminate once the tool reports no error and the reward marks the query as correct
    if "tool_responses" not in msg:
        return False
    json_str = msg["tool_responses"][0]["content"]
    obj = json.loads(json_str)
    return "error" not in obj or (obj["error"] is None and obj["reward"] == 1)


sql_writer = ConversableAgent(
    "sql_writer",
    llm_config={"config_list": config_list},
    system_message="You are good at writing SQL queries. Always respond with a function call to execute_sql().",
    is_termination_msg=check_termination,
)
user_proxy = UserProxyAgent("user_proxy", human_input_mode="NEVER", max_consecutive_auto_reply=5)


@sql_writer.register_for_llm(description="Function for executing SQL query and returning a response")
@user_proxy.register_for_execution()
def execute_sql(
    reflection: Annotated[str, "Think about what to do"], sql: Annotated[str, "SQL query"]
) -> Annotated[Dict[str, str], "Dictionary with keys 'result' and 'error'"]:
    observation, reward, _, _, info = gym.step(sql)
    error = observation["feedback"]["error"]
    if not error and reward == 0:
        error = "The SQL query returned an incorrect result"
    if error:
        return {
            "error": error,
            "wrong_result": observation["feedback"]["result"],
            "correct_result": info["gold_result"],
        }
    else:
        return {
            "result": observation["feedback"]["result"],
        }


message = f"""Below is the schema for a SQL database:
{schema}
Generate a SQL query to answer the following question:
{question}
"""

user_proxy.initiate_chat(sql_writer, message=message)

OAI_CONFIG_LIST

[
    {
        "model": "gpt-3.5-turbo",
        "api_key": "sk-xxxxxxxxxxxxxxxx"
    }
]
Categories
RAG

Llama3 PDF RAG using PhiData

Groq PDF RAG using PhiData

~/phidata/cookbook/llms/groq/rag

1. Setup Environment and Install packages

git clone https://github.com/phidatahq/phidata
cd phidata/cookbook/llms/groq/rag
conda create -n phidata python=3.11 -y
conda activate phidata
pip install -r requirements.txt
export GROQ_API_KEY=xxxxxxxxxx

2. Start Database

Download: Docker Desktop https://www.docker.com/products/docker-desktop/

Test that Docker is installed

docker ps
docker run -d \
  -e POSTGRES_DB=ai \
  -e POSTGRES_USER=ai \
  -e POSTGRES_PASSWORD=ai \
  -e PGDATA=/var/lib/postgresql/data/pgdata \
  -v pgvolume:/var/lib/postgresql/data \
  -p 5532:5432 \
  --name pgvector \
  phidata/pgvector:16

Test that the database started

docker ps

3. Start Application

streamlit run app.py

Ollama PDF RAG using PhiData

~/phidata/cookbook/llms/ollama/rag
cd phidata/cookbook/llms/ollama/rag
pip install -r requirements.txt
ollama pull llama3
streamlit run app.py
Categories
AI Agents

CrewAI Groq Llama 3: Sports News Agency

Install

conda create -n crewai python=3.11 -y
conda activate crewai
pip install 'crewai[tools]' flask requests
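
The CrewAI code further down never pins an LLM explicitly. One common way to point it at Groq's Llama 3 is through the OpenAI-compatible environment variables shown below; the variable names and model id follow the pattern used in the AutoGen Groq post above and are assumptions here, so check your CrewAI version's LLM configuration docs before relying on them.

# Hypothetical Groq wiring for the CrewAI script below, via OpenAI-compatible
# environment variables (same convention as the AutoGen Groq example above).
# Set these before the crewai imports; names and model id are assumptions.
import os

os.environ["OPENAI_API_BASE"] = "https://api.groq.com/openai/v1"
os.environ["OPENAI_MODEL_NAME"] = "llama3-70b-8192"
os.environ["OPENAI_API_KEY"] = os.environ.get("GROQ_API_KEY", "xxxxxxxxxxxx")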

Create DB Code (Optional)

Optional: skip this step if you already have a database with data.

import sqlite3

def create_db():
    conn = sqlite3.connect('nba_games.db')
    c = conn.cursor()
    c.execute('''
        CREATE TABLE IF NOT EXISTS games (
            team_name TEXT,
            game_id TEXT,
            status TEXT,
            home_team TEXT,
            home_team_score INTEGER,
            away_team TEXT,
            away_team_score INTEGER
        )
    ''')
    games = [
        ("warriors", "401585601", "Final", "Los Angeles Lakers", 121, "Golden State Warriors", 128),
        ("lakers", "401585601", "Final", "Los Angeles Lakers", 121, "Golden State Warriors", 128),
        ("nuggets", "401585577", "Final", "Miami Heat", 88, "Denver Nuggets", 100),
        ("heat", "401585577", "Final", "Miami Heat", 88, "Denver Nuggets", 100)
    ]
    c.executemany('INSERT INTO games (team_name, game_id, status, home_team, home_team_score, away_team, away_team_score) VALUES (?, ?, ?, ?, ?, ?, ?)', games)
    conn.commit()
    conn.close()

create_db()

API Code

from flask import Flask, request, jsonify
import sqlite3

app = Flask(__name__)

def get_game_score(team_name):
    conn = sqlite3.connect('nba_games.db')
    c = conn.cursor()
    team_name = team_name.lower()
    c.execute('SELECT * FROM games WHERE team_name = ?', (team_name,))
    result = c.fetchone()
    conn.close()
    if result:
        keys = ["game_id", "status", "home_team", "home_team_score", "away_team", "away_team_score"]
        return dict(zip(keys, result[1:]))
    else:
        return {"team_name": team_name, "score": "unknown"}

@app.route('/')
def home():
    return jsonify({
        'message': 'Welcome to the NBA Scores API. Use /score?team=<team_name> to fetch game scores.'
    })

@app.route('/score', methods=['GET'])
def score():
    team_name = request.args.get('team', '')
    if not team_name:
        return jsonify({'error': 'Missing team name'}), 400
    score = get_game_score(team_name)
    return jsonify(score)

if __name__ == '__main__':
    app.run(debug=True)

CrewAI Code

import os
import json
import requests
from crewai import Agent, Task, Crew
from crewai_tools import BaseTool

# 1. Create Custom Tool to Get Game Score from API
from crewai_tools import tool
@tool("Game Score Tool")
def game_score_tool(team_name: str) -> str:
    """Get the current score for a given NBA game by querying the Flask API. It accepts team_name"""
    url = f'http://127.0.0.1:5000/score?team={team_name}'
    response = requests.get(url)
    if response.status_code == 200:
        return json.dumps(response.json(), indent=2)
    else:
        return json.dumps({"error": "API request failed", "status_code": response.status_code}, indent=2)

# 2. Create Agents
researcher = Agent(
    role='Researcher',
    goal='Gather and analyze information on NBA game scores',
    verbose=True,
    backstory=(
        "As a seasoned researcher, you have a keen eye for detail and a "
        "deep understanding of sports analytics. You're adept at sifting through "
        "scores to find the most relevant and accurate data."
    ),
    tools=[game_score_tool],
    allow_delegation=False
)

writer = Agent(
    role='Sports Journalist',
    goal='Compose an engaging news article based on NBA game scores',
    verbose=True,
    backstory=(
        "With a talent for storytelling, you convert statistical data and game outcomes "
        "into engaging sports narratives. Your articles are insightful, capturing the excitement "
        "of the games and providing a deep analysis for sports enthusiasts."
    ),
    allow_delegation=False
)

# 3. Define Tasks
research_task = Task(
    description="Investigate the scores for the Warriors game.",
    expected_output='A detailed report summarizing the data.',
    tools=[game_score_tool],
    agent=researcher,
)

writing_task = Task(
    description="Write a detailed news article about an NBA game, focusing stats.",
    expected_output='An engaging and informative article suitable for publication in sports media.',
    context=[research_task],
    agent=writer,
)

# 4. Run the Crew
crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task]
)

result = crew.kickoff()
print(result)