Flake8cleanup #649

Open
wants to merge 53 commits into master
Changes from all commits · 53 commits
23e73e2
whitespace
evelynmitchell Nov 27, 2024
6f0c257
whitespace
evelynmitchell Nov 27, 2024
438a937
whitespace
evelynmitchell Nov 28, 2024
95c2572
comment typo cleanup
evelynmitchell Nov 28, 2024
f495700
flake8 warnings in ascii art
evelynmitchell Nov 28, 2024
c79c832
whitespace flake8
evelynmitchell Nov 28, 2024
7eeda2c
whitespace flake8
evelynmitchell Nov 28, 2024
f167ee6
whitespace
evelynmitchell Nov 28, 2024
7475be3
whitespace
evelynmitchell Nov 28, 2024
29d10a4
whitespace flake8
evelynmitchell Nov 28, 2024
639e960
whitespace flake8
evelynmitchell Nov 28, 2024
ce16edd
whitespace flake8
evelynmitchell Nov 28, 2024
2dcc770
whitespace flake8
evelynmitchell Nov 28, 2024
8d77a69
whitespace flake8
evelynmitchell Nov 28, 2024
c94fca1
whitespace flake8
evelynmitchell Nov 28, 2024
a56954a
whitespace flake8
evelynmitchell Nov 28, 2024
2add997
whitespace flake8
evelynmitchell Nov 28, 2024
39dbccf
whitespace, comment flake8
evelynmitchell Nov 28, 2024
1fa72ba
comment flake8
evelynmitchell Nov 28, 2024
5322031
whitespace flake8
evelynmitchell Nov 28, 2024
1b2b2b0
whitespace
evelynmitchell Nov 28, 2024
78b0f8b
whitespace flake8
evelynmitchell Nov 28, 2024
4af9661
whitespace flake8
evelynmitchell Nov 28, 2024
2400252
whitespace
evelynmitchell Nov 28, 2024
6156c4f
whitespace flake8
evelynmitchell Nov 28, 2024
7fc75fc
whitespace flake8
evelynmitchell Nov 28, 2024
ab5a702
whitespace flake8
evelynmitchell Nov 28, 2024
8e0e918
whitespace flake8
evelynmitchell Nov 28, 2024
18959f9
suppress pre colon error flake8
evelynmitchell Nov 28, 2024
844f33e
whitespace colon suppress
evelynmitchell Nov 28, 2024
2e88da7
suppress unused import warning flake8
evelynmitchell Nov 28, 2024
ea649b9
whitespace colon suppress
evelynmitchell Nov 28, 2024
b162e76
whitespace colon suppress
evelynmitchell Nov 28, 2024
42af348
whitespace
evelynmitchell Nov 28, 2024
7405beb
whitespace pre colon
evelynmitchell Nov 28, 2024
797cde4
whitespace, pre-colon
evelynmitchell Nov 28, 2024
141ead9
whitespace pre-colon
evelynmitchell Nov 28, 2024
89d082a
whitespace precolon
evelynmitchell Nov 28, 2024
dc5be00
whitespace precolon
evelynmitchell Nov 28, 2024
ba5d911
suppress unused import errors
evelynmitchell Nov 28, 2024
226ed56
remove unused import
evelynmitchell Nov 28, 2024
47ce926
import not at top of file suppress
evelynmitchell Nov 28, 2024
3fcec30
whitespace
evelynmitchell Nov 28, 2024
40b2133
whitespace
evelynmitchell Nov 28, 2024
58ee391
whitespace
evelynmitchell Nov 28, 2024
aee1eff
whitespace
evelynmitchell Nov 28, 2024
f2801f1
whitespace
evelynmitchell Nov 28, 2024
f166edf
whitespace
evelynmitchell Nov 28, 2024
3dfcbab
whitespace
evelynmitchell Nov 28, 2024
de65b58
remove unused import
evelynmitchell Nov 28, 2024
4b782a0
whitespace
evelynmitchell Nov 28, 2024
13e2a66
whitespace
evelynmitchell Nov 28, 2024
703c967
add flake8 config
evelynmitchell Nov 28, 2024
3 changes: 3 additions & 0 deletions .flake8
@@ -0,0 +1,3 @@
[flake8]
# line too long
extend-ignore = E501
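
The new .flake8 only turns off E501 (line too long) repo-wide; every other check stays on, which is why the rest of this PR adds per-line # noqa markers instead. A minimal sketch of that split, using an invented module name and string that are not part of this repo:

# example_noqa.py, illustrative only, not a file in this PR
import threading  # noqa: F401  (unused import kept on purpose; F401 is not in extend-ignore, so it still needs a per-line marker)

# E501 is ignored repo-wide by the config above, so this long line needs no marker at all:
BANNER = "a deliberately long single-line string that would otherwise trip flake8's E501 line-length check if extend-ignore were not set"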
8 changes: 4 additions & 4 deletions concurrent_mix.py
@@ -26,7 +26,7 @@
insurance. Provide guidance on how to navigate the complexities of Delaware's
corporate law and ensure that all hiring practices are in compliance with
state and federal regulations.
""",
""", # noqa: W291, W293
llm=model,
max_loops=1,
autosave=False,
@@ -53,7 +53,7 @@
practices are in compliance with state and federal regulations. Consider the
implications of hiring foreign nationals and the requirements for obtaining
necessary visas and work permits.
""",
""", # noqa: W291, W293
llm=model,
max_loops=1,
autosave=False,
@@ -75,15 +75,15 @@
programming languages, and data structures. Outline the key responsibilities,
including designing and developing AI agents, integrating with existing systems,
and ensuring scalability and performance.
""",
""", # noqa: W291, W293
"""
Generate a detailed job description for a Prompt Engineer, including
required skills and responsibilities. Ensure the description covers the
necessary technical expertise, such as proficiency in natural language processing,
machine learning, and software development. Outline the key responsibilities,
including designing and optimizing prompts for AI systems, ensuring prompt
quality and consistency, and collaborating with cross-functional teams.
""",
""", # noqa: W291, W293
]

# Run agents with tasks concurrently
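
concurrent_mix.py (the parts elided above) builds several Agent instances and a matching task list, then runs them against those tasks concurrently. The exact dispatch helper is not visible in this diff, so the following is only a sketch of the pattern using a plain thread pool; the function name run_all and the pool size are assumptions:

from concurrent.futures import ThreadPoolExecutor

def run_all(agents, tasks):
    # Pair each agent with its task and execute the runs in parallel threads.
    with ThreadPoolExecutor(max_workers=len(agents)) as pool:
        futures = [pool.submit(agent.run, task) for agent, task in zip(agents, tasks)]
        return [future.result() for future in futures]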
9 changes: 7 additions & 2 deletions example_async_vs_multithread.py
@@ -1,6 +1,5 @@
import os
import asyncio
import threading
from swarms import Agent
from swarm_models import OpenAIChat
import time
@@ -40,7 +39,8 @@
streaming_on=False,
)

# Function to measure time and memory usage

# Function to measure time and memory usage # noqa: E302
def measure_time_and_memory(func):
def wrapper(*args, **kwargs):
start_time = time.time()
@@ -52,6 +52,7 @@ def wrapper(*args, **kwargs):
return result
return wrapper


# Function to run the agent asynchronously
@measure_time_and_memory
async def run_agent_async():
@@ -61,11 +62,15 @@ async def run_agent_async():
)
)


# Function to run the agent on another thread
@measure_time_and_memory
def run_agent_thread():
asyncio.run(run_agent_async())


# Run the agent asynchronously and on another thread to test the speed
asyncio.run(run_agent_async())
run_agent_thread()

# noqa: W391
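
Only fragments of measure_time_and_memory are visible in the hunks above (time.time() at the start, result returned at the end). A plausible completion of the wrapper, assuming it simply reports elapsed wall-clock time and leaving the memory part aside since it is not shown in this diff:

import time

def measure_time_and_memory(func):
    # Times each call to func; the real file may also record memory usage.
    def wrapper(*args, **kwargs):
        start_time = time.time()
        result = func(*args, **kwargs)
        elapsed = time.time() - start_time
        print(f"{func.__name__} finished in {elapsed:.2f}s")  # the actual message text is assumed
        return result
    return wrapper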
6 changes: 3 additions & 3 deletions new_features_examples/persistent_legal_agent.py
@@ -14,7 +14,7 @@
5. Maintain consistency across related documents
6. Output <DONE> only when document is complete and verified

Remember: All output should be marked as 'DRAFT' and require professional legal review."""
Remember: All output should be marked as 'DRAFT' and require professional legal review.""" # noqa: W291, W293


def create_vc_legal_agent():
@@ -67,7 +67,7 @@ def generate_legal_document(agent, document_type, parameters):

Returns:
str: The generated document content
"""
""" # noqa: W291, W293
prompt = f"""
Generate a {document_type} with the following parameters:
{parameters}
@@ -80,7 +80,7 @@ def generate_legal_document(agent, document_type, parameters):
5. Output <DONE> when complete

Include [REQUIRES LEGAL REVIEW] tags for sections needing attorney attention.
"""
""" # noqa: W291, W293

return agent.run(prompt)

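
For orientation: generate_legal_document(agent, document_type, parameters) formats the prompt shown above and returns agent.run(prompt). A hypothetical call, with a document type and parameter dict invented purely for illustration:

agent = create_vc_legal_agent()
draft = generate_legal_document(
    agent,
    document_type="term sheet",            # invented example value
    parameters={"company": "ExampleCo"},   # invented example value
)
print(draft)  # per the system prompt, output should be marked DRAFT and tagged [REQUIRES LEGAL REVIEW]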
10 changes: 5 additions & 5 deletions new_features_examples/real_estate_agent.py
@@ -64,7 +64,7 @@ def __init__(self, api_key: str):

Args:
api_key (str): PropertyRadar API key
"""
""" # noqa: W291, W293
self.api_key = api_key
self.base_url = "https://api.propertyradar.com/v1"
self.session = requests.Session()
@@ -99,7 +99,7 @@ def search_properties(

Returns:
List[PropertyListing]: List of matching properties
"""
""" # noqa: W291, W293
try:
# Build the query parameters
params = {
@@ -186,7 +186,7 @@ def __init__(
model_name (str): Name of the LLM model to use
temperature (float): Temperature setting for the LLM
saved_state_path (Optional[str]): Path to save agent state
"""
""" # noqa: W291, W293
self.property_api = PropertyRadarAPI(propertyradar_api_key)

# Initialize OpenAI model
@@ -229,7 +229,7 @@ def _get_system_prompt(self) -> str:
- Local business development plans
- Traffic patterns and accessibility
- Nearby amenities and businesses
- Future development potential"""
- Future development potential""" # noqa: W291, W293

def search_properties(
self,
@@ -251,7 +251,7 @@ def search_properties(

Returns:
List[Dict[str, Any]]: List of properties with analysis
"""
""" # noqa: W291, W293
try:
# Search for properties
properties = self.property_api.search_properties(
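
The hunks above mostly touch docstrings, but they do show the client's shape: PropertyRadarAPI is constructed with an API key, keeps a session against api.propertyradar.com, and search_properties returns a List[PropertyListing]. A usage sketch; the environment variable name and the max_price keyword are assumptions, since the full search_properties signature is cut off here:

import os

api = PropertyRadarAPI(api_key=os.environ["PROPERTYRADAR_API_KEY"])  # env var name assumed
listings = api.search_properties(max_price=1_000_000)  # keyword argument assumed
for listing in listings:
    print(listing)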
10 changes: 5 additions & 5 deletions new_features_examples/rearrange_test.py
@@ -25,7 +25,7 @@
so the finance team can take actionable steps to cut off unproductive spending. You also monitor and
dynamically adapt the swarm to optimize their performance. Finally, you summarize their findings
into a coherent report.
""",
""", # noqa: W291, W293
llm=model,
max_loops=1,
dashboard=False,
@@ -45,7 +45,7 @@
(e.g., marketing, operations, utilities, etc.), and flagging areas where there seems to be excessive spending.
You will provide a detailed breakdown of each category, along with specific recommendations for cost-cutting.
Pay close attention to monthly recurring subscriptions, office supplies, and non-essential expenditures.
""",
""", # noqa: W291, W293
llm=model,
max_loops=1,
dashboard=False,
@@ -65,7 +65,7 @@
such as highlighting the specific transactions that can be immediately cut off and summarizing the areas
where the company is overspending. Your summary will be used by the BossAgent to generate the final report.
Be clear and to the point, emphasizing the urgency of cutting unnecessary expenses.
""",
""", # noqa: W291, W293
llm=model,
max_loops=1,
dashboard=False,
@@ -85,7 +85,7 @@
and providing recommendations for potential cost reduction. After the analysis, the SummaryGenerator will then
consolidate all the findings into an actionable summary that the finance team can use to immediately cut off unnecessary expenses.
Together, your collaboration is essential to streamlining and improving the company’s financial health.
"""
""" # noqa: W291, W293

# Create a list of agents
agents = [boss_agent, worker1, worker2]
@@ -112,7 +112,7 @@
analysis of recent transactions to identify which expenses can be cut off to improve profitability.
Analyze the provided transaction data and create a detailed report on cost-cutting opportunities,
focusing on recurring transactions and non-essential expenditures.
"""
""" # noqa: W291, W293

# Run the swarm system with the task
output = agent_system.run(task)
16 changes: 8 additions & 8 deletions new_features_examples/spike/agent_rearrange_test.py
@@ -4,7 +4,7 @@
- You send structured data to the swarm through the users form they make
- then connect rag for every agent using llama index to remember all the students data
- structured outputs
"""
""" # noqa: W291, W293

import os
from dotenv import load_dotenv
@@ -50,7 +50,7 @@ class CollegesRecommendation(BaseModel):
Focus on creating actionable, well-reasoned final recommendations that
balance all relevant factors and stakeholder input.

"""
""" # noqa: W291, W293

function_caller = OpenAIFunctionCaller(
system_prompt=FINAL_AGENT_PROMPT,
@@ -71,7 +71,7 @@ class CollegesRecommendation(BaseModel):
6. Create a comprehensive student profile summary

Always consider both quantitative metrics (GPA, test scores) and qualitative aspects
(personal growth, challenges overcome, unique perspectives).""",
(personal growth, challenges overcome, unique perspectives).""", # noqa: W291, W293
llm=model,
max_loops=1,
verbose=True,
@@ -94,7 +94,7 @@ class CollegesRecommendation(BaseModel):
6. Track historical admission data and acceptance rates

Focus on providing accurate, comprehensive information about each institution
while considering both academic and cultural fit factors.""",
while considering both academic and cultural fit factors.""", # noqa: W291, W293
llm=model,
max_loops=1,
verbose=True,
@@ -117,7 +117,7 @@ class CollegesRecommendation(BaseModel):
6. Explain the reasoning behind each match

Always provide a balanced list with realistic expectations while
considering both student preferences and admission probability.""",
considering both student preferences and admission probability.""", # noqa: W291, W293
llm=model,
max_loops=1,
verbose=True,
@@ -140,7 +140,7 @@ class CollegesRecommendation(BaseModel):
6. Document key points of agreement and disagreement

Maintain objectivity while ensuring all important factors are thoroughly discussed
and evaluated.""",
and evaluated.""", # noqa: W291, W293
llm=model,
max_loops=1,
verbose=True,
@@ -163,7 +163,7 @@ class CollegesRecommendation(BaseModel):
6. Suggest alternative options when appropriate

Focus on constructive criticism that helps improve the final college list
while maintaining realistic expectations.""",
while maintaining realistic expectations.""", # noqa: W291, W293
llm=model,
max_loops=1,
verbose=True,
@@ -188,7 +188,7 @@ class CollegesRecommendation(BaseModel):

Focus on creating actionable, well-reasoned final recommendations that
balance all relevant factors and stakeholder input.
""",
""", # noqa: W291, W293
llm=model,
max_loops=1,
verbose=True,
18 changes: 9 additions & 9 deletions new_features_examples/spike/test.py
@@ -4,7 +4,7 @@
- You send structured data to the swarm through the users form they make
- then connect rag for every agent using llama index to remember all the students data
- structured outputs
"""
""" # noqa: W291, W293

import os
from dotenv import load_dotenv
@@ -50,7 +50,7 @@ class CollegesRecommendation(BaseModel):
Focus on creating actionable, well-reasoned final recommendations that
balance all relevant factors and stakeholder input.

"""
""" # noqa: W291, W293

function_caller = OpenAIFunctionCaller(
system_prompt=FINAL_AGENT_PROMPT,
@@ -71,7 +71,7 @@ class CollegesRecommendation(BaseModel):
6. Create a comprehensive student profile summary

Always consider both quantitative metrics (GPA, test scores) and qualitative aspects
(personal growth, challenges overcome, unique perspectives).""",
(personal growth, challenges overcome, unique perspectives).""", # noqa: W291, W293
llm=model,
max_loops=1,
verbose=True,
@@ -94,7 +94,7 @@ class CollegesRecommendation(BaseModel):
6. Track historical admission data and acceptance rates

Focus on providing accurate, comprehensive information about each institution
while considering both academic and cultural fit factors.""",
while considering both academic and cultural fit factors.""", # noqa: W291, W293
llm=model,
max_loops=1,
verbose=True,
@@ -117,7 +117,7 @@ class CollegesRecommendation(BaseModel):
6. Explain the reasoning behind each match

Always provide a balanced list with realistic expectations while
considering both student preferences and admission probability.""",
considering both student preferences and admission probability.""", # noqa: W291, W293
llm=model,
max_loops=1,
verbose=True,
@@ -140,7 +140,7 @@ class CollegesRecommendation(BaseModel):
6. Document key points of agreement and disagreement

Maintain objectivity while ensuring all important factors are thoroughly discussed
and evaluated.""",
and evaluated.""", # noqa: W291, W293
llm=model,
max_loops=1,
verbose=True,
@@ -163,7 +163,7 @@ class CollegesRecommendation(BaseModel):
6. Suggest alternative options when appropriate

Focus on constructive criticism that helps improve the final college list
while maintaining realistic expectations.""",
while maintaining realistic expectations.""", # noqa: W291, W293
llm=model,
max_loops=1,
verbose=True,
@@ -188,7 +188,7 @@ class CollegesRecommendation(BaseModel):

Focus on creating actionable, well-reasoned final recommendations that
balance all relevant factors and stakeholder input.
""",
""", # noqa: W291, W293
llm=model,
max_loops=1,
verbose=True,
@@ -227,7 +227,7 @@ class CollegesRecommendation(BaseModel):
- Extracurriculars: Robotics Club President, Math Team
- Budget: Need financial aid
- Preferred Environment: Medium-sized urban campus
"""
""" # noqa: W291, W293

# Run the comprehensive college selection analysis
result = college_selection_workflow.run(
2 changes: 1 addition & 1 deletion scripts/auto_tests_docs/auto_docs.py
@@ -1,4 +1,4 @@
###### VERISON2
# VERSION2
import inspect
import os
import threading
1 change: 0 additions & 1 deletion scripts/auto_tests_docs/docs.py
@@ -111,7 +111,6 @@ def TEST_WRITER_SOP_PROMPT(

Create 5,000 lines of extensive and thorough tests for the code below using the guide, do not worry about your limits you do not have any
just write the best tests possible, the module is {module}, the file path is {path} return all of the code in one file, make sure to test all the functions and methods in the code.



######### TESTING GUIDE #############
2 changes: 1 addition & 1 deletion (file name not shown)
@@ -110,7 +110,7 @@ def is_duplicate(new_prompt, published_prompts):
def extract_use_cases(prompt):
"""Extract use cases from the prompt by chunking it into meaningful segments."""
# This is a simple placeholder; you can use a more advanced method to extract use cases
chunks = [prompt[i : i + 50] for i in range(0, len(prompt), 50)]
chunks = [prompt[i: i + 50] for i in range(0, len(prompt), 50)]
return [
{"title": f"Use case {idx+1}", "description": chunk}
for idx, chunk in enumerate(chunks)
2 changes: 1 addition & 1 deletion (file name not shown)
@@ -110,7 +110,7 @@ def is_duplicate(new_prompt, published_prompts):
def extract_use_cases(prompt):
"""Extract use cases from the prompt by chunking it into meaningful segments."""
# This is a simple placeholder; you can use a more advanced method to extract use cases
chunks = [prompt[i : i + 50] for i in range(0, len(prompt), 50)]
chunks = [prompt[i: i + 50] for i in range(0, len(prompt), 50)]
return [
{"title": f"Use case {idx+1}", "description": chunk}
for idx, chunk in enumerate(chunks)
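
The only change in these last two files is dropping the space before the slice colon (flake8's E203, whitespace before ':'). The comprehension itself just cuts the prompt into fixed 50-character chunks, for example:

prompt = "Summarize the quarterly revenue figures and flag any unusual spending patterns for review."
chunks = [prompt[i: i + 50] for i in range(0, len(prompt), 50)]
print(chunks)  # two chunks: the first 50 characters, then the remainder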