
GCP Workload Identity

Required IAM Roles

  • roles/container.admin
  • roles/iam.serviceAccountAdmin

Enable Workload Identity on New Cluster

gcloud container clusters create <CLUSTER_NAME> \
    --region=<COMPUTE_REGION> \
    --workload-pool=<PROJECT_ID>.svc.id.goog

Update Existing Cluster

gcloud container clusters update <CLUSTER_NAME> \
    --region=<COMPUTE_REGION> \
    --workload-pool=<PROJECT_ID>.svc.id.goog

Create New Node Pool

gcloud container node-pools create <NODEPOOL_NAME> \
    --cluster=<CLUSTER_NAME> \
    --region=<COMPUTE_REGION> \
    --workload-metadata=GKE_METADATA

Update Existing Node Pool

gcloud container node-pools update <NODEPOOL_NAME> \
    --cluster=<CLUSTER_NAME> \
    --region=<COMPUTE_REGION> \
    --workload-metadata=GKE_METADATA

Get Cluster Credentials

gcloud container clusters get-credentials <CLUSTER_NAME> \
    --region=<COMPUTE_REGION>

Create Kubernetes Namespace

kubectl create namespace <NAMESPACE>

Create Kubernetes Service Account

kubectl create serviceaccount <KSA_NAME> \
    --namespace=<NAMESPACE>

Create IAM Service Account

gcloud iam service-accounts create <GSA_NAME> \
    --project=<GSA_PROJECT>

Add IAM Policy Binding

gcloud projects add-iam-policy-binding <GSA_PROJECT> \
    --member "serviceAccount:<GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com" \
    --role "<ROLE_NAME>"

Allow KSA to Impersonate GSA

gcloud iam service-accounts add-iam-policy-binding <GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com \
    --role roles/iam.workloadIdentityUser \
    --member "serviceAccount:<PROJECT_ID>.svc.id.goog[<NAMESPACE>/<KSA_NAME>]"

Annotate KSA

kubectl annotate serviceaccount <KSA_NAME> \
    --namespace=<NAMESPACE> \
    iam.gke.io/gcp-service-account=<GSA_NAME>@<GSA_PROJECT>.iam.gserviceaccount.com

Apply Deployment

kubectl apply -f <DEPLOYMENT_FILE>

Combined Code Block

# Replace placeholders
CLUSTER_NAME=<your_cluster_name>
COMPUTE_REGION=<your_compute_region>
PROJECT_ID=<your_project_id>
NODEPOOL_NAME=<your_nodepool_name>
NAMESPACE=<your_namespace>
KSA_NAME=<your_ksa_name>
GSA_NAME=<your_gsa_name>
GSA_PROJECT=<your_gsa_project>
ROLE_NAME=<your_role_name>
DEPLOYMENT_FILE=<your_deployment_file>

# Commands
gcloud container clusters create $CLUSTER_NAME --region=$COMPUTE_REGION --workload-pool=$PROJECT_ID.svc.id.goog
gcloud container clusters update $CLUSTER_NAME --region=$COMPUTE_REGION --workload-pool=$PROJECT_ID.svc.id.goog
gcloud container node-pools create $NODEPOOL_NAME --cluster=$CLUSTER_NAME --region=$COMPUTE_REGION --workload-metadata=GKE_METADATA
gcloud container node-pools update $NODEPOOL_NAME --cluster=$CLUSTER_NAME --region=$COMPUTE_REGION --workload-metadata=GKE_METADATA
gcloud container clusters get-credentials $CLUSTER_NAME --region=$COMPUTE_REGION
kubectl create namespace $NAMESPACE
kubectl create serviceaccount $KSA_NAME --namespace=$NAMESPACE
gcloud iam service-accounts create $GSA_NAME --project=$GSA_PROJECT
gcloud projects add-iam-policy-binding $GSA_PROJECT --member "serviceAccount:$GSA_NAME@$GSA_PROJECT.iam.gserviceaccount.com" --role "$ROLE_NAME"
gcloud iam service-accounts add-iam-policy-binding $GSA_NAME@$GSA_PROJECT.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:$PROJECT_ID.svc.id.goog[$NAMESPACE/$KSA_NAME]"
kubectl annotate serviceaccount $KSA_NAME --namespace=$NAMESPACE iam.gke.io/gcp-service-account=$GSA_NAME@$GSA_PROJECT.iam.gserviceaccount.com
kubectl apply -f $DEPLOYMENT_FILE

To find values for the placeholders, use commands such as gcloud config list for <PROJECT_ID> and gcloud compute regions list for <COMPUTE_REGION>, then assign the results to the corresponding variables in the code.

Verify Workload Identity Setup

Create Pod Configuration File (wi-test.yaml)

apiVersion: v1
kind: Pod
metadata:
  name: workload-identity-test
  namespace: <NAMESPACE>
spec:
  containers:
  - image: google/cloud-sdk:slim
    name: workload-identity-test
    command: ["sleep","infinity"]
  serviceAccountName: <KSA_NAME>
  nodeSelector:
    iam.gke.io/gke-metadata-server-enabled: "true"

Create Pod

kubectl apply -f wi-test.yaml

Open Interactive Session

kubectl exec -it workload-identity-test \
  --namespace=<NAMESPACE> \
  -- /bin/bash

Verify Service Account Inside Pod

curl -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/email

Combined Code Block

# Replace placeholders
NAMESPACE=<your_namespace>
KSA_NAME=<your_ksa_name>

# Create Pod Configuration File
echo "apiVersion: v1
kind: Pod
metadata:
  name: workload-identity-test
  namespace: $NAMESPACE
spec:
  containers:
  - image: google/cloud-sdk:slim
    name: workload-identity-test
    command: [\"sleep\",\"infinity\"]
  serviceAccountName: $KSA_NAME
  nodeSelector:
    iam.gke.io/gke-metadata-server-enabled: \"true\"" > wi-test.yaml

# Create Pod
kubectl apply -f wi-test.yaml

# Open Interactive Session
kubectl exec -it workload-identity-test --namespace=$NAMESPACE -- /bin/bash

# Inside Pod, run:
curl -H "Metadata-Flavor: Google" http://169.254.169.254/computeMetadata/v1/instance/service-accounts/default/email

To find the value of <NAMESPACE> and <KSA_NAME>, you can use kubectl get namespaces and kubectl get serviceaccounts -n <NAMESPACE> respectively. Assign these to the variables in the code.


Autogen Ollama Integration

# In a separate terminal, start a LiteLLM proxy for the local Ollama model
# (the config below assumes it listens on http://127.0.0.1:8000):
litellm --model ollama/mistral

import autogen

config_list = [
    {
        "api_base": "http://127.0.0.1:8000",
        "api_key" : "NULL",
    }
]

llm_config = {
    "request_timeout" : 800,
    "config_list" : config_list
}

assistant = autogen.AssistantAgent(
    "assistant",
    llm_config = llm_config
)

user_proxy = autogen.UserProxyAgent(
    "user_proxy",
    code_execution_config = {
        "work_dir" : "coding"
    }
)

user_proxy.initiate_chat(
    assistant,
    message ="What is the name of the model you are based on?"
)

ChatGPT Chatbot

import openai

def chatbot(prompt):
    response = openai.ChatCompletion.create(
        model = "gpt-3.5-turbo", 
        messages = [{'role': 'user', 'content': prompt}],
    )
    return response['choices'][0]['message']['content']
    
if __name__ == "__main__":
    while True:
        user_input = input("You: ")
        if user_input.lower() in ["exit", "quit"]:
            break
        print("Bot: ", chatbot(user_input))

Reinforcement Learning Algorithms

RL Algorithms Flowchart
  • RL Algorithms: Root
  • Model-Free RL: No model.
    • Policy Optimization: Optimize strategy.
      • Policy Gradient, A2C/A3C, PPO, TRPO, DDPG, TD3, SAC
    • Q-Learning: Learn action value.
      • DDPG, TD3, SAC, DQN, C51, QR-DQN, HER
  • Model-Based RL: Uses model.
    • Learn the Model: Learn from experience.
      • World Models, I2A, MBMF, MBVE
    • Given the Model: Known model.
      • AlphaZero

Mermaid Source

graph TB
  RL["RL Algorithms"]
  MF["Model-Free RL"]
  MB["Model-Based RL"]
  PO["Policy Optimization"]
  QL["Q-Learning"]
  LM["Learn the Model"]
  GM["Given the Model"]
  RL --> MF
  RL --> MB
  MF --> PO
  MF --> QL
  MB --> LM
  MB --> GM
  PO -->|Policy Gradient| PG
  PO -->|A2C/A3C| A2C
  PO -->|PPO| PPO
  PO -->|TRPO| TRPO
  PO -->|DDPG| DDPG1
  PO -->|TD3| TD31
  PO -->|SAC| SAC1
  QL -->|DDPG| DDPG2
  QL -->|TD3| TD32
  QL -->|SAC| SAC2
  QL -->|DQN| DQN
  QL -->|C51| C51
  QL -->|QR-DQN| QR
  QL -->|HER| HER
  LM -->|World Models| WM
  LM -->|I2A| I2A
  LM -->|MBMF| MBMF
  LM -->|MBVE| MBVE
  GM -->|AlphaZero| AZ

Detailed

RL Algorithms (Reinforcement Learning):

  • Algorithms designed to learn optimal actions by interacting with an environment.

Model-Free RL:

  • Algorithms that don’t rely on a model of the environment.
  • Policy Optimization:
    • Directly optimize the strategy of actions.
      • Policy Gradient: Update policies using gradient ascent.
        • A2C/A3C: Advantage Actor-Critic methods.
        • PPO: Proximal Policy Optimization. Ensures stable policy updates.
        • TRPO: Trust Region Policy Optimization. Constrained policy updates.
      • DDPG: Deep Deterministic Policy Gradient. Uses deep networks for continuous actions.
      • TD3: Twin Delayed DDPG. Enhances DDPG stability.
      • SAC: Soft Actor-Critic. Mixes policy optimization with entropy-based exploration.
  • Q-Learning:
    • Learn the value of actions.
      • DQN: Deep Q-Network. Uses neural networks to approximate the Q-function.
      • C51: Distributional DQN. Predicts return distributions.
      • QR-DQN: Quantile Regression DQN. A distributional variant.
      • HER: Hindsight Experience Replay. Makes use of unsuccessful experiences.
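
The Q-Learning branch above can be made concrete with a minimal tabular sketch. The environment here is a toy 5-state chain invented for illustration; no RL library is assumed:

```python
import random

# Minimal tabular Q-learning on a toy 5-state chain: the agent starts in
# state 0 and receives reward 1.0 for reaching the terminal state 4.
# Actions: 0 = left, 1 = right. (Illustrative sketch, not a library API.)
N_STATES = 5
ACTIONS = (0, 1)
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Environment dynamics: deterministic left/right moves, clamped at the edges."""
    next_state = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    return next_state, reward, next_state == N_STATES - 1

def greedy(q, state):
    """Greedy action with random tie-breaking (bool indexes the ACTIONS tuple)."""
    q0, q1 = q[(state, 0)], q[(state, 1)]
    return random.choice(ACTIONS) if q0 == q1 else ACTIONS[q1 > q0]

def train(episodes=500, seed=0):
    random.seed(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # epsilon-greedy exploration
            action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(q, state)
            next_state, reward, done = step(state, action)
            # Q-learning update: bootstrap from the best action in the next state
            target = reward + GAMMA * max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += ALPHA * (target - q[(state, action)])
            state = next_state
    return q

q = train()
policy = [greedy(q, s) for s in range(N_STATES - 1)]
print(policy)
```

After training, the greedy policy moves right toward the goal from every non-terminal state. DQN and its variants replace the Q-table with a neural network but keep the same bootstrapped update.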

Model-Based RL:

  • Algorithms that utilize a model of the environment.
  • Learn the Model:
    • Learn the environment model from experience.
      • World Models: Neural networks to simulate the environment’s dynamics.
      • I2A: Imagination-Augmented Agents. Uses the learned model to plan.
      • MBMF: Combines both Model-Based and Model-Free approaches.
      • MBVE: Model-Based Value Expansion.
  • Given the Model:
    • Algorithms using a known environment model.
      • AlphaZero: Combines Monte Carlo Tree Search (MCTS) with deep learning.
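
When the model is given, optimal values can be computed by pure planning, with no environment interaction at all. A minimal value-iteration sketch on a made-up two-state MDP (the transition table and rewards are hypothetical):

```python
# Value iteration: with transitions and rewards fully known, the optimal
# value function is the fixed point of the Bellman optimality backup and
# can be found by dynamic programming alone.
GAMMA = 0.9

# Hypothetical known model: MODEL[state][action] = (next_state, reward)
MODEL = {
    0: {"stay": (0, 0.0), "go": (1, 1.0)},
    1: {"stay": (1, 2.0), "go": (0, 0.0)},
}

def value_iteration(model, gamma=GAMMA, theta=1e-8):
    v = {s: 0.0 for s in model}
    while True:
        delta = 0.0
        for s in model:
            # Bellman optimality backup over all actions in state s
            best = max(r + gamma * v[s2] for s2, r in model[s].values())
            delta = max(delta, abs(best - v[s]))
            v[s] = best
        if delta < theta:
            return v

v = value_iteration(MODEL)
print(v)
```

Here the optimal policy stays in state 1 collecting reward 2 forever, so v[1] = 2 / (1 - 0.9) = 20 and v[0] = 1 + 0.9 * 20 = 19. AlphaZero-style methods combine this kind of known-model planning with learned value and policy networks.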

Langchain Vertex AI

  • Python package: google-cloud-aiplatform
  • Environment setup:
    • Credentials (gcloud, workload identity)
    • Or GOOGLE_APPLICATION_CREDENTIALS environment variable

Install Command:

pip install langchain google-cloud-aiplatform

Documentation:

# Code 1
from langchain.llms import VertexAI
llm = VertexAI()
print(llm("What are some of the pros and cons of Python as a programming language?"))

# Code 2
from langchain.prompts import PromptTemplate
template = "Question: {question}\nAnswer: Let's think step by step."
prompt = PromptTemplate.from_template(template)
chain = prompt | llm
question = "Who was the president in the year Justin Bieber was born?"
print(chain.invoke({"question": question}))

# Code 3
llm = VertexAI(model_name="code-bison", max_output_tokens=1000, temperature=0.3)
question = "Write a python function that checks if a string is a valid email address"
print(llm(question))

# Code 4
result = llm.generate([question])
result.generations

# Code 5
import asyncio
asyncio.run(llm.agenerate([question]))

# Code 6
import sys
for chunk in llm.stream(question):
    sys.stdout.write(chunk)
    sys.stdout.flush()

GeniA

Function | Question | Response
list_namespaces | list all the namespace | Listed namespaces: default, kube-system
list_namespaced_secret | list all the namespaced secret | No secrets in the “default” namespace
list_deployment | Should I go ahead and list the kubernetes deployments for a given namespace? | Listed 13 deployments in the “default” namespace
describe_deployment | explain me more about seo deploy | General explanation of Kubernetes deployment provided
list_namespaced_events | list default namespaced events | Listed events in the “default” namespace with count, timestamps, message, and reason
scale_deployment_resources | scale deployment resource for seo. set it to 64mb memory, 200m cpu | Error, required additional parameters. Later, successfully updated resources for “seo” deployment
list_namespaced_pod_events | list namespace default pod events . seo deploy | Listed events for pods in “seo” deployment about missing CPU requests
get_service_owner | get a service owner of any service in the default namespace | Error, required service name. Later, no information available about the owner of “seo” service
get_top_k_containers_usage | get top 5 containers usage in eks in region [REGION] | Listed top 5 containers usage in [REGION] region with container name, image count, and size in MB
get_pods_errors_events_by_deployment | get pods errors events by seo deployment | Listed events for pods in “seo” deployment about missing CPU requests
kubernetes_get_service_errors | kubernetes get service errors seo | Listed some events in the “default” namespace, not specifically for “seo” service

Chat GPT API Node.js

npm install openai

ES6

import OpenAI from "openai";

const openai = new OpenAI();

const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: "user", content: "Say this is a test" }],
    model: "gpt-3.5-turbo",
});

console.log(chatCompletion.choices[0].message.content);

package.json

{
  "dependencies": {
    "openai": "^4.12.4"
  },
  "type": "module"
}

CommonJS

const OpenAI = require("openai");

const openai = new OpenAI();

openai.chat.completions.create({
  messages: [{ role: "user", content: "Say this is a test" }],
  model: "gpt-3.5-turbo",
})
.then(chatCompletion => {
  console.log(chatCompletion.choices[0].message.content);
})
.catch(console.error);
Feature | CommonJS | ES6 Modules
Used In | Node.js, Browserify | Modern browsers, Node.js with config
Import/Export Syntax | const toy = require('toy'); module.exports = toy; | import toy from 'toy'; export default toy;
Pros | 1. Easy to use 2. Dynamic loading 3. Well-supported in Node.js | 1. Faster loading 2. Static analysis 3. Modern syntax
Cons | 1. Slower loading 2. Older syntax | 1. More complex syntax 2. Needs configuration

Which is Best?

  • For New Projects: ES6 is modern and efficient.
  • For Older Projects: CommonJS is well-supported and easy.

Combined Recommendation: Choose CommonJS for simplicity and legacy support. Choose ES6 for modern features and better optimization.


Capitalising

  1. Skill Upgrading:
    • Learn machine learning frameworks
    • Acquire data engineering skills
  2. Market Research:
    • Identify AI gaps in current market
    • Validate problem-solution fit
  3. Networking:
    • Attend AI-focused meetups
    • Partner with data scientists
  4. Prototype:
    • Develop MVP using AI
    • User feedback loop
  5. Funding:
    • Create pitch deck
    • Approach VCs specialized in AI
  6. Launch:
    • Go-to-market strategy
    • Measure KPIs
  7. Scale:
    • Optimize algorithms
    • Expand user base
  8. Exit Strategy:
    • Identify acquisition targets
    • Plan IPO
  9. Continuous Learning:
    • Stay updated with AI trends
    • Iterate business model
  10. Intellectual Property:
    • File patents
    • License algorithms

Becoming Rich

Method to Become Rich | Simple Explanation
Follow a Straightforward Formula | Rohrssen believes building wealth is straightforward and formulaic.
Mindset and Options | Maintain a mindset of abundance even when financially constrained.
Surround Yourself with Right People | Keep company with people who uplift you and share your vision.
Learn from Failures | Entrepreneurial ventures aren’t risky; you learn something even if you fail.
Adaptability | Be adaptable to market changes and consumer desires.
Investment Meetings | Win or lose investment in the first two minutes; maintain frame control.
Genuine Frame in Sales | Be genuine in your approach to sales and jobs for better success.
Balance Identity | Have something other than your business to tie your identity to.
Review Influences | Regularly review your circle and influences to ensure they’re uplifting.
Utilize Pressure | Use the pressure from investors or circumstances to drive business growth.
Move Towards Profit | Shift the business toward profitability rather than growth at all costs.

Secrets of Viral YouTube Shorts

Strategy | Details
Analyze Popular Shorts | Studies shorts from popular creators like Mr. Beast to understand what makes them viral.
Readability | Aims for a readability level of fifth grade or under for wider audience reach.
Personalization | Makes content personality-based to engage viewers.
Retention Optimization | Uses analytics to aim for a 90% retention rate; trims video ends to improve retention.
Storytelling | Uses hooks and narratives to make content engaging.
Visual Framing | Consistently frames videos for brand recognition and better visibility.
Platform Differentiation | Tailors content according to the platform (TikTok, Instagram Reels, YouTube Shorts).
Audience Avatar | Targets content to specific audience types, like her younger self or nieces.
Shareability | Focuses on making content that is easily shareable.
Content Planning | Uses bullet points or rough scripts to plan videos, revises after filming.
Pacing | Maintains a balance in pacing to keep the audience engaged without overwhelming them.