
AI Python


Notes from the DeepLearning.AI course "AI Python for Beginners".

These are mostly practical examples, but they give you an overview you can adapt to your own context.

  • Check out my page on Python if you need a refresher.

Create an environment to use AI in Python scripts

Installations

If you want a ready-made setup, you can install Anaconda from the official Anaconda website. On Linux you just have to run bash Anaconda3-2024.10-1-Linux-x86_64.sh. Once this is done you will have access to a GUI to create your Jupyter notebooks. To launch this GUI, do as follows:

cd anaconda3
cd bin
./anaconda-navigator

You will need to create an account on Anaconda Cloud and log in.

  • If you need any help, you can check the official Anaconda documentation.
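The snippets in the next sections also import the openai and python-dotenv packages and read the API key from a .env file. Here is a minimal sketch to prepare and check that setup (the pip command and the .env layout are assumptions about a plain local install, not part of the course notebooks; with Anaconda you can also install the packages with conda):

# In a terminal (or an Anaconda prompt): pip install openai python-dotenv
# Then create a .env file next to your notebook containing a single line:
# OPENAI_API_KEY=<your key>
import os
from dotenv import load_dotenv

# Quick sanity check that the key is readable before calling the API
load_dotenv('.env', override=True)
print("API key found:", os.getenv('OPENAI_API_KEY') is not None)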

Setting up the API key

import os
from dotenv import load_dotenv
from openai import OpenAI

# Get the OpenAI API key from the .env file
## For security reasons, if you deploy something more permanent, prefer a secrets vault with key rotation over a .env file
load_dotenv('.env', override=True)
openai_api_key = os.getenv('OPENAI_API_KEY')
client = OpenAI(api_key = openai_api_key)

# Method to get llm response
def get_llm_response(prompt):
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "You are an AI assistant.",
            },
            {"role": "user", "content": prompt},
        ],
        temperature=0.0,
    )
    response = completion.choices[0].message.content
    return response

# test our method
prompt = "What is the capital of France?"
response = get_llm_response(prompt)
print(response)

# We can modify the temperature to change the randomness of the output
def get_llm_response_new_temp(prompt):
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "You are an AI assistant.", 
            },
            {"role": "user", "content": prompt},
        ],
        temperature=1.0, # change this to a value between 0 and 2
    )
    response = completion.choices[0].message.content
    return response
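The sections below import print_llm_response and get_llm_response from helper_functions, a module provided with the course notebooks. If you run the snippets outside those notebooks, a minimal stand-in is to reuse the get_llm_response function defined above (assumption: the course helper does essentially the same thing but prints the answer instead of returning it):

# Minimal local stand-in for helper_functions.print_llm_response,
# reusing the get_llm_response function defined above
def print_llm_response(prompt):
    print(get_llm_response(prompt))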

Building LLM prompts with variables

from helper_functions import print_llm_response, get_llm_response
print_llm_response("What is the capital of France?")
# Will print "The capital of France is Paris."
number_of_lines = 8
prompt = f"generate a poem with {number_of_lines} lines"
print_llm_response(prompt)
print(prompt)
# Will print a poem with 8 lines
# Would also work this way
number_of_lines = 8
prompt = f"generate a poem with {number_of_lines} lines"
response = get_llm_response(prompt)
print(response)

Iteratively updating AI prompts using lists

from helper_functions import print_llm_response, get_llm_response

# Translate the flavors in ice_cream_flavors to Spanish
ice_cream_flavors = ["Vanilla", "Strawberry"]

for flavor in ice_cream_flavors:
    prompt = f"""For the ice cream flavor listed below,
    translate it to Spanish.
    
    Flavor: {flavor}
    
    """
    print_llm_response(prompt)
# This prints
'''
____________________________________________________________________________________________________
Vanilla in Spanish is "Vainilla."
____________________________________________________________________________________________________


____________________________________________________________________________________________________
Strawberry in Spanish is "fresa."
____________________________________________________________________________________________________
'''
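Because get_llm_response returns a string, you can also collect the answers in a new list instead of printing them, which makes them easier to reuse later (a small sketch built on the same ice_cream_flavors list):

translations = []
for flavor in ice_cream_flavors:
    prompt = f"Translate the ice cream flavor '{flavor}' to Spanish. Reply with the translation only."
    translations.append(get_llm_response(prompt))

# The answers are now reusable Python strings instead of printed text
print(translations)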

Using dictionaries to complete high priority tasks using AI

from helper_functions import print_llm_response, get_llm_response
#task example, large list not ordered by priority. Want to prioritize
list_of_tasks = [
    "Compose a brief email to my boss explaining that I will be late for tomorrow's meeting.",
    "Write a birthday poem for Otto, celebrating his 28th birthday.",
    "Write a 300-word review of the movie 'The Arrival'.",
    "Draft a thank-you note for my neighbor Dapinder who helped water my plants while I was on vacation.",
    "Create an outline for a presentation on the benefits of remote work."
]
#instead of that unorganized large list, divide tasks by priority
high_priority_tasks = [
    "Compose a brief email to my boss explaining that I will be late for tomorrow's meeting.",
    "Create an outline for a presentation on the benefits of remote work."
]

medium_priority_tasks = [
    "Write a birthday poem for Otto, celebrating his 28th birthday.",
    "Draft a thank-you note for my neighbor Dapinder who helped water my plants while I was on vacation."
]

low_priority_tasks = [
    "Write a 300-word review of the movie 'The Arrival'."
]
#create dictionary with all tasks
#dictionaries can contain lists!
prioritized_tasks = {
    "high_priority": high_priority_tasks,
    "medium_priority": medium_priority_tasks,
    "low_priority": low_priority_tasks
}
#complete high priority tasks 
for task in prioritized_tasks["high_priority"]:
    print_llm_response(task)
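Since the dictionary keys are just strings, you can also loop over the priority levels in order and work through every list, not only the high priority one (a short sketch using the same prioritized_tasks dictionary):

for priority in ["high_priority", "medium_priority", "low_priority"]:
    print(f"--- {priority} ---")
    for task in prioritized_tasks[priority]:
        print_llm_response(task)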

Customizing recipes with lists, dictionaries and AI

# Update the following dictionary 
# with your own preferences 

from helper_functions import print_llm_response, get_llm_response

my_food_preferences = {
        "dietary_restrictions": ["no spicy food"], #List with dietary restrictions
        "favorite_ingredients": ["carrots", "olives", "tomatoes", "chicken", "fish"], #List with top three favorite ingredients
        "experience_level": "intermediate", #Experience level as a chef
        "maximum_spice_level": 0 #Spice level in a scale from 1 to 10
}

print(my_food_preferences)

prompt = f"""Please suggest a recipe that tries to include 
the following ingredients: 
{my_food_preferences["favorite_ingredients"]}.
The recipe should adhere to the following dietary restrictions:
{my_food_preferences["dietary_restrictions"]}.
The difficulty of the recipe should be: 
{my_food_preferences["experience_level"]}
The maximum spice level on a scale of 10 should be: 
{my_food_preferences["maximum_spice_level"]} 
Provide a detailed description of the recipe.
"""

print_llm_response(prompt)
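Note that the f-string captures the dictionary values at the moment the prompt is built, so if you change a preference you need to rebuild the prompt before calling the model again (a small sketch):

# Change one preference, then rebuild the prompt so the change is picked up
my_food_preferences["experience_level"] = "beginner"

prompt = f"""Please suggest a recipe that tries to include the following ingredients:
{my_food_preferences["favorite_ingredients"]}.
The recipe should adhere to the following dietary restrictions:
{my_food_preferences["dietary_restrictions"]}.
The difficulty of the recipe should be: {my_food_preferences["experience_level"]}.
The maximum spice level on a scale of 10 should be: {my_food_preferences["maximum_spice_level"]}.
Provide a detailed description of the recipe.
"""

print_llm_response(prompt)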

Helping AI make decisions

from helper_functions import print_llm_response
# Go through the list of tasks; if a task takes 15 minutes or more, let the LLM handle it
task_list = [
    {
        "description": "Compose a brief email to my boss explaining that I will be late for next week's meeting.",
        "time_to_complete": 3
    },
    {
        "description": "Create an outline for a presentation on the benefits of remote work.",
        "time_to_complete": 60
    },
    {
        "description": "Write a 300-word review of the movie 'The Arrival'.",
        "time_to_complete": 30
    },
    {
        "description": "Create a shopping list for tofu and olive stir fry.",
        "time_to_complete": 5
    }
]

for task in task_list:
    if task["time_to_complete"] >= 15:
        task_to_do = task["description"]
        print_llm_response(task_to_do)
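The same condition can also be used to split the tasks into two lists, so you keep track of what was delegated to the model and what is left for you (a small sketch reusing task_list):

tasks_for_ai = []
tasks_for_me = []

for task in task_list:
    if task["time_to_complete"] >= 15:
        tasks_for_ai.append(task["description"])
    else:
        tasks_for_me.append(task["description"])

print("Delegated to the LLM:", tasks_for_ai)
print("Still to do myself:", tasks_for_me)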

Using files in python and AI

from helper_functions import get_llm_response
from IPython.display import display, Markdown

# We open a file with an email in it
f = open("email.txt", "r")
email = f.read()
f.close()

# We ask the AI to extract the bullet points from the email
prompt = f"""Extract bullet points from the following email. 
Include the sender information. 

Email:
{email}"""

print(prompt)
bullet_points = get_llm_response(prompt)
print(bullet_points)

# Print in Markdown format
display(Markdown(bullet_points))

# Here is a prompt to identify all of the countries mentioned 
# in the email
prompt = f"""Make a list of all the countries mentioned in the email

Email:
{email}
"""

countries = get_llm_response(prompt)
print(countries)

# Another example: get a list of activities the person mentioned in the email
prompt = f"""Make a list of all the activities that Daniel mentioned doing in the email

Email:
{email}
"""

activities = get_llm_response(prompt)
print(activities)
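The same open/write/close pattern used for reading also lets you keep the model's answer for later (a short sketch saving the extracted bullet points; the output file name is just an example):

# Save the extracted bullet points next to the original email
f = open("email_summary.txt", "w")
f.write(bullet_points)
f.close()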

Loading and using your own data

from helper_functions import upload_txt_file, list_files_in_directory, print_llm_response

# list out the files in the current working directory:
list_files_in_directory()

# Using AI to determine if a file is relevant for a specific context
# List of the journal files
files = ["cape_town.txt", "madrid.txt", "rio_de_janeiro.txt", "sydney.txt", "tokyo.txt"]
for file in files:
    # Read journal file for the city
    f = open(file, "r")
    journal = f.read()
    f.close()

    # Create prompt
    prompt = f"""Respond with "Relevant" or "Not relevant": 
    the journal describes restaurants and their specialties. 

    Journal:
    {journal}"""

    # Use LLM to determine if the journal entry is useful
    print(f"{file} -> {get_llm_response(prompt)}")

Extracting useful information from journal entries

from helper_functions import *
from IPython.display import display, HTML
files = ["cape_town.txt", "istanbul.txt", "new_york.txt", "paris.txt", 
          "rio_de_janeiro.txt", "sydney.txt", "tokyo.txt"]

for file in files:
    #Open file and read contents
    journal_entry = read_journal(file)

    #Extract restaurants and display csv
    prompt =  f"""Please extract a comprehensive list of the restaurants 
    and their respective best dishes mentioned in the following journal entry. 
    
    Ensure that each restaurant name is accurately identified and listed. 
    Provide your answer in CSV format, ready to save.

    Exclude the "```csv" declaration, don't add spaces after the 
    comma, include column headers.

    Format:
    Restaurant, Dish
    Res_1, Dsh_1
    ...

    Journal entry:
    {journal_entry}
    """
    
    print(file)
    print_llm_response(prompt)
    print("") # Prints a blank line!
# We could modify the prompt to get the response in HTML instead of CSV
# and then render it like this:
display(HTML(html_response))

# We could also save this to an html file
f = open("highlighted_text.html", 'w') 
f.write(html_response) 
f.close()
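html_response is not defined in the snippet above; one way to obtain it is to ask the model for HTML directly, for example for the last journal entry processed in the loop (a sketch, the exact prompt wording is an assumption):

html_prompt = f"""Extract the restaurants and their best dishes mentioned in the
following journal entry. Return valid HTML only, with each restaurant name in bold.

Journal entry:
{journal_entry}
"""

html_response = get_llm_response(html_prompt)
display(HTML(html_response))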

Work with CSV files and AI

# Imports
from helper_functions import get_llm_response, print_llm_response, display_table
from IPython.display import Markdown, display
import csv

f = open("itinerary.csv", 'r')

# Read data from the file
csv_reader = csv.DictReader(f)
itinerary = []
for row in csv_reader:
    print(row)
    itinerary.append(row)

f.close()

print(itinerary)
# display the table instead of raw csv
display_table(itinerary)

# Filter data
# Create an empty list to store the filtered data
filtered_data = []

# Filter by country
for trip_stop in itinerary:
    # For example: get the destinations located in "Japan"
    if trip_stop["Country"] == "Japan":
        filtered_data.append(trip_stop)

# Using AI to suggest trip activities
# Select the first destination from the itinerary list
trip_stop = itinerary[0]
print(trip_stop)
# Create variables to store the individual items from trip_stop:
city = trip_stop["City"]
country = trip_stop["Country"]
arrival = trip_stop["Arrival"]
departure = trip_stop["Departure"]
# Write a prompt to get activity suggestions for your trip destination:
prompt = f"""I will visit {city}, {country}, from {arrival} to {departure}. 
Please create a detailed daily itinerary."""

print(prompt)

# Use Markdown to display the LLM response nicely

# Store the LLM response
response = get_llm_response(prompt)

# Print in Markdown format
display(Markdown(response))
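The filtered_data list built earlier can be fed back to the same helpers, for example to display only the Japanese stops and ask for suggestions for each of them (a small sketch reusing display_table and print_llm_response):

# Show only the stops located in Japan
display_table(filtered_data)

# Ask for suggestions for each filtered stop
for trip_stop in filtered_data:
    prompt = f"""I will visit {trip_stop["City"]}, {trip_stop["Country"]},
from {trip_stop["Arrival"]} to {trip_stop["Departure"]}.
Please suggest three activities for this stop."""
    print_llm_response(prompt)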

Work with functions

from helper_functions import print_llm_response, get_llm_response
from IPython.display import Markdown, display

# create a function to print the journal
def print_journal(file):
    f = open(file, "r")
    journal = f.read()
    f.close()
    print(journal)
# Read in the Sydney journal
print_journal("sydney.txt")

# define a function that returns a variable, rather than printing to screen
def read_journal(file):
    f = open(file, "r")
    journal = f.read()
    f.close()
    # print(journal)
    return journal

journal_tokyo = read_journal("tokyo.txt")

# example of use of a function with AI: function that takes in a filename as a parameter, uses an LLM to create a three bullet point summary, and returns the bullets as a string.
def create_bullet_points(file):
    # Read in the file and store the contents as a string
    with open(file, "r") as f:
        file_contents = f.read()

    # Write a more specific prompt that includes the file contents
    prompt = f"""Summarize the following text into three bullet points:\n\
{file_contents}
    """
    bullets = get_llm_response(prompt)  # get_llm_response returns the text; print_llm_response would only print it

    # Return the bullet points
    return bullets

# Run the function for istanbul.txt and print the resulting summary
print(create_bullet_points("istanbul.txt"))
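Because the function returns a string, you can combine it with the display patterns used earlier on this page (a short usage sketch):

tokyo_bullets = create_bullet_points("tokyo.txt")
display(Markdown(tokyo_bullets))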