aiagent package¶
Submodules¶
aiagent.agent module¶
- class AiAgent(user, options)¶
Bases:
object
The top level interface to interact with the AiAgent system.
- add_content(base_name, content, store_name=None)¶
- add_content_from_local_files(data_types, store_name=None)¶
- add_document_object(document, store_name=None)¶
- direct_query(input, should_print=False)¶
Query the underlying AI systems directly with the user query only. No document chunks from user stores are used, which makes it easier for aiAgent to return responses based on the base knowledge of the underlying AI system.
- get_document(store_name, docid)¶
- prompt_query(prompt_id, prompt_input, store_name)¶
Query aiAgent using a specific prompt and matching document chunks as part of the AI interactions. The prompt specified by prompt_id must already exist.
- query(input, store_name=None, should_print=False)¶
Query aiAgent with matching document chunks used for the AI interactions. Responses from the AI system are typically limited to knowledge provided within the context of the initial user query.
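A minimal usage sketch (the AiUser constructor argument and the empty options dict are illustrative assumptions, not documented defaults):

    from aiagent.agent import AiAgent
    from aiagent.user import AiUser

    # Hypothetical setup; real options keys depend on deployment configuration.
    user = AiUser("alice")
    agent = AiAgent(user, options={})

    # Answer from the base knowledge of the underlying AI system only.
    agent.direct_query("What is a semantic network?", should_print=True)

    # Answer grounded in matching document chunks from the named user store.
    agent.query("Summarize my meeting notes", store_name="notes", should_print=True)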
aiagent.analysisprompts module¶
aiagent.baseagentstore module¶
aiagent.basesmnetstore module¶
aiagent.docustore module¶
- class DocuStore(user_name, storage_list, config)¶
Bases:
object
Maintains details about documents kept in user stores. It does not hold the contents of the documents; it tracks details such as the date added, who added it, how big the document is, and how many chunks it is broken into. A short usage sketch follows the method list below.
- add_document(store_name, doc_chunks, metadatas)¶
Defines a new document in the docu store, but does not manage document contents.
Parameters: store_name (string): The name of the user storage from which the document will be retrieved.
- add_document_object(store_name, document)¶
- delete_document(store_name, doc_id)¶
Completely removes the document from the user store as well as from the agent store and semantic network store. Also removes nodes from the semantic network unless they are referenced by other sources.
- document_exists(store_name, document_name)¶
- get_document(store_name, doc_id, include_content=True)¶
Gets a single document’s details from the given user store, optionally including the document’s contents in text format as stored in chunks in the store.
- get_documents(store_name, **kwargs)¶
Provides a list of basic details about the documents stored in the storage named by store_name.
- get_user_storage()¶
Provides a list of user storages with details such as the number of documents.
- update_document_metadata(store_name, metadatas)¶
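A sketch of typical read-side use, assuming a DocuStore instance built during agent setup (the storage_list and config shapes, and the "doc_id" detail key, are assumptions):

    # docu_store: a DocuStore created elsewhere,
    # e.g. DocuStore(user_name, storage_list, config)
    for storage in docu_store.get_user_storage():
        print(storage)  # per-store details such as the number of documents

    for doc in docu_store.get_documents("notes"):
        # "doc_id" is a hypothetical key; include_content=True also
        # returns the chunked text of the document.
        details = docu_store.get_document("notes", doc["doc_id"], include_content=True)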
aiagent.embeddings module¶
aiagent.factories module¶
- class Factory¶
Bases:
object
Registered factory_methods create instances by string name.
Each instance of Factory is designed to create instances of objects that typically inherit from a common base class. A factory can have as many factory_method instances registered with it as needed. Client code can create instances of an object using a string name, which it typically gets from a configuration option. This is a good way to swap out implementations of a concrete class based on deployment and environmental factors. For example, use a RabbitMQ queue or an AWS SQS queue (see the sketch after this class entry).
- create(name, **factory_args)¶
Create an instance of the Factory’s base type with args provided.
If the given name is not registered, an exception is thrown. The args passed must match the method signature of the factory_method.
- register(name, factory_method)¶
Register a factory_method with a name.
The name provided is what clients can use to create instances of that type. factory_method (function): A function that knows how to create an instance of the needed class with the args provided during calls to the create method.
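A sketch of the register/create pattern described above; the queue classes here are stand-ins for illustration, not real aiagent types:

    from aiagent.factories import Factory

    class RabbitMqQueue:
        def __init__(self, host):
            self.host = host

    class SqsQueue:
        def __init__(self, host):
            self.host = host

    queue_factory = Factory()
    queue_factory.register("rabbitmq", lambda host: RabbitMqQueue(host))
    queue_factory.register("sqs", lambda host: SqsQueue(host))

    # The name typically comes from configuration, so implementations can be
    # swapped per deployment without changing client code.
    queue = queue_factory.create("rabbitmq", host="localhost")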
- define_factory(factory_id)¶
- get_agent_store_factory()¶
- get_factory(factory_id)¶
- get_smnet_store_factory()¶
aiagent.ingestion module¶
- class Ingestion¶
Bases:
object
- calc_full_storage_name_from_name(store_name)¶
- getEmailDetails(text_splitter)¶
- loadCalendarEventPdf(source, text_splitter)¶
- load_cal_events(directory)¶
- load_emails(directory)¶
- load_text_documents(directory)¶
- parse_eml()¶
- setupHuggingFaceLocal()¶
- setup_agent_llm_chat_gpt(chunk_size=1300, chunk_overlap=0, separator='\n\n', request_timeout=360)¶
- to_text_document()¶
- class UnstructuredEmailStringLoader(email_text: str, mode: str = 'single', **unstructured_kwargs: Any)¶
Bases:
UnstructuredFileLoader
Loader that uses unstructured to load email from strings.
aiagent.langproc module¶
- class NaturalLanguageProcessor(llm, model=None, chat=None, embedding=None, text_splitter=None)¶
Bases:
object
aiagent.pipeline module¶
- class PipelineStage(stage_name)¶
Bases:
object
A collection of pipeline processing methods for a single stage in the ProcessingPipeline.
- add_processor(processor)¶
- class ProcessingPipeline(pipeline_id, config)¶
Bases:
object
Manages a collection of PipelineStages that will be used to process input data and generate output data.
Processing pipelines are used when user queries happen and when documents are ingested. This allows custom processing functions to be added into the processing pipeline without having to build classes that inherit from aiAgent classes to override functionality. It also allows different pipelines to be used under different runtime conditions, such as for certain types of users or certain companies hosting the system. A short wiring sketch follows the method list below.
- add_processors(stage_name, processors)¶
Add processors to a specific stage of a specific pipeline.
The processors are stored at the class level and used by instances of ProcessingPipeline to do the actual processing work.
- process(input_data, state={}, start_at_stage=None, stop_after_stage=None)¶
Starts processing the input data by running through the stages of the pipeline, executing all of the processor functions to generate the final output.
Any results from running the pipeline that need to be available to the caller should be put into the pipeline state object. The specific names used in state depend on the pipeline and its processors; for query calls, state should contain a ‘user_response’ property.
- process_single_stage(state, stage, input_data)¶
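A sketch of adding a custom processor and running a pipeline; the stage name, the empty config, and the class-level call to add_processors are assumptions based on the descriptions above:

    from aiagent.pipeline import ProcessingPipeline

    def uppercase_processor(input_data, state, config):
        # Processors receive the current data, the shared state dict, and config.
        return input_data.upper()

    # Per the add_processors docs, processors are stored at the class level
    # and used by instances to do the processing work.
    ProcessingPipeline.add_processors("pre_query", [uppercase_processor])

    pipeline = ProcessingPipeline("query_pipeline", config={})
    state = {}
    pipeline.process("hello", state=state)
    # For query pipelines, the answer is expected under state["user_response"].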
aiagent.prompt module¶
- class FixedPromptTemplate(*, input_variables: List[str], output_parser: Optional[BaseOutputParser] = None, partial_variables: Mapping[str, Union[str, Callable[[], str]]] = None, template: str, template_format: str = 'f-string', validate_template: bool = True)¶
Bases:
PromptTemplate
- format(**kwargs) → str¶
Format the prompt with the inputs.
Args: kwargs: Any arguments to be passed to the prompt template.
Returns: A formatted string.
Example: prompt.format(variable1="foo")
- input_variables: List[str]¶
A list of the names of the variables the prompt template expects.
- template: str¶
The prompt template.
- template_format: str¶
The format of the prompt template. Options are: ‘f-string’, ‘jinja2’.
- validate_template: bool¶
Whether or not to try validating the template.
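A sketch of constructing and formatting a template, following the PromptTemplate conventions it inherits (f-string format); the template text is illustrative:

    from aiagent.prompt import FixedPromptTemplate

    prompt = FixedPromptTemplate(
        input_variables=["topic"],
        template="Write three questions about {topic}.",
        template_format="f-string",
    )
    print(prompt.format(topic="semantic networks"))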
aiagent.retrieval module¶
- class AgentRetrievalQA(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, combine_documents_chain: BaseCombineDocumentsChain, input_key: str = 'query', output_key: str = 'result', return_source_documents: bool = False, retriever: BaseRetriever)¶
Bases:
RetrievalQA, BaseModel
- retriever: BaseRetriever¶
aiagent.semanticnetwork module¶
- class RabbitQueue(queueInfo)¶
Bases:
object
- add_listener(callback, autoAck=False)¶
- listen_forever()¶
- push(body)¶
- push_document_chunk(chunk, metadata, user_name)¶
Creates a queue message for the chunk for a worker to process.
Params: chunk (string): The text of the chunk, NOT a Document object.
- stop_listening()¶
- successfully_processed(msgKey)¶
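A sketch of the producer/listener flow; the queueInfo contents and the callback signature are assumptions, since neither is documented here:

    from aiagent.semanticnetwork import RabbitQueue

    # queueInfo details (host, queue name, credentials) are deployment-specific.
    queue = RabbitQueue(queueInfo={"host": "localhost", "queue": "chunks"})

    def handle_message(body):
        # Hypothetical callback shape: process the message body.
        print(body)

    queue.add_listener(handle_message, autoAck=True)
    queue.listen_forever()  # blocks until stop_listening() is called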
- class SemanticNetwork(agent_llm, user=None, options=None)¶
Bases:
object
- add_content(document, metadata)¶
- query(input, responseFormat='text')¶
- class SemanticNetworkQueue(llm, user=None, options=None)¶
Bases:
object
- add_content(chunks, metadatas)¶
- add_content_object(document)¶
- listen_to_queue_until_stopped()¶
- query(input, responseFormat='text')¶
- stop_listening_to_queue()¶
- json_formatter(docs)¶
- text_formatter(docs)¶
aiagent.standard_processors module¶
- format_user_response_pipeline_processor(input_data, state, config)¶
- query_llm_pipeline_processor(input_data, state, config)¶
- query_smnet_pipeline_processor(input_data, state, config)¶
- setup_llm_storage_pipeline_processor(input_data, state, config)¶
- setup_standard_pipeline_processors()¶
aiagent.user module¶
- class AiUser(user_name, predefined_questions=[])¶
Bases:
object
Module contents¶
- class AgentRetrievalQA(*, memory: Optional[BaseMemory] = None, callbacks: Optional[Union[List[BaseCallbackHandler], BaseCallbackManager]] = None, callback_manager: Optional[BaseCallbackManager] = None, verbose: bool = None, tags: Optional[List[str]] = None, combine_documents_chain: BaseCombineDocumentsChain, input_key: str = 'query', output_key: str = 'result', return_source_documents: bool = False, retriever: BaseRetriever)¶
Bases:
RetrievalQA, BaseModel
- callback_manager: Optional[BaseCallbackManager]¶
Deprecated, use callbacks instead.
- callbacks: Callbacks¶
Optional list of callback handlers (or callback manager). Defaults to None. Callback handlers are called throughout the lifecycle of a call to a chain, starting with on_chain_start, ending with on_chain_end or on_chain_error. Each custom chain can optionally call additional callback methods, see Callback docs for full details.
- combine_documents_chain: BaseCombineDocumentsChain¶
Chain to use to combine the documents.
- memory: Optional[BaseMemory]¶
Optional memory object. Defaults to None. Memory is a class that gets called at the start and at the end of every chain. At the start, memory loads variables and passes them along in the chain. At the end, it saves any returned variables. There are many different types of memory - please see memory docs for the full catalog.
- retriever: BaseRetriever¶
- return_source_documents: bool¶
Return the source documents.
- tags: Optional[List[str]]¶
Optional list of tags associated with the chain. Defaults to None. These tags will be associated with each call to this chain, and passed as arguments to the handlers defined in callbacks. You can use these to, e.g., identify a specific instance of a chain with its use case.
- verbose: bool¶
Whether or not to run in verbose mode. In verbose mode, some intermediate logs will be printed to the console. Defaults to the langchain.verbose value.
- class AiAgent(user, options)¶
Bases:
object
The top level interface to interact with the AiAgent system.
- add_content(base_name, content, store_name=None)¶
- add_content_from_local_files(data_types, store_name=None)¶
- add_document_object(document, store_name=None)¶
- direct_query(input, should_print=False)¶
Query the underlying AI systems directly with the user query only. No document chunks from user stores are used, which makes it easier for aiAgent to return responses based on the base knowledge of the underlying AI system.
- get_document(store_name, docid)¶
- prompt_query(prompt_id, prompt_input, store_name)¶
Query aiAgent using a specific prompt and matching document chunks as part of the AI interactions. The prompt specified by prompt_id must already exist.
- query(input, store_name=None, should_print=False)¶
Query aiAgent with matching document chunks used for the AI interactions. Responses from the AI system are typically limited to knowledge provided within the context of the initial user query.
- class AiUser(user_name, predefined_questions=[])¶
Bases:
object
- class BaseAgentStore¶
Bases:
object
- abstract add_content(base_name, content)¶
- abstract direct_query(input)¶
- abstract query(input)¶
- class BaseSemanticNetworkStore(config, user_name)¶
Bases:
object
- add_relationships(relationships, metadata)¶
- query(query)¶
- class DocuStore(user_name, storage_list, config)¶
Bases:
object
Maintains details about documents kept in user stores. It does not hold the contents of the documents; it tracks details such as the date added, who added it, how big the document is, and how many chunks it is broken into.
- add_document(store_name, doc_chunks, metadatas)¶
Defines a new document in the docu store, but does not manage document contents.
Parameters: store_name (string): The name of the user storage from which the document will be retrieved.
- add_document_object(store_name, document)¶
- delete_document(store_name, doc_id)¶
Completely removes the document from the user store as well as from the agent store and semantic network store. Also removes nodes from the semantic network unless they are referenced by other sources.
- document_exists(store_name, document_name)¶
- get_document(store_name, doc_id, include_content=True)¶
Gets a single document’s details from the given user store, optionally including the document’s contents in text format as stored in chunks in the store.
- get_documents(store_name, **kwargs)¶
Provides a list of basic details about the documents stored in the storage named by store_name.
- get_user_storage()¶
Provides a list of user storages with details such as the number of documents.
- update_document_metadata(store_name, metadatas)¶
- class ElasticSearchSemanticNetworkStore(options, user, mappings, embedder)¶
Bases:
object
- add_relationships(relationships, metadata)¶
- add_relationships_v2(relationships, metadata)¶
- check_for_index_exist(store_name=None)¶
- check_item_exists(source)¶
- get_es()¶
- class Factory¶
Bases:
object
Registered factory_methods create instances by string name.
Each instance of Factory is designed to create instances of objects that typically inherit from a common base class. A factory can have as many factory_method instances registered with it as needed. Client code can create instances of an object using a string name, which it typically gets from a configuration option. This is a good way to swap out implementations of a concrete class based on deployment and environmental factors. For example, use a RabbitMQ queue or an AWS SQS queue.
- create(name, **factory_args)¶
Create an instance of the Factory’s base type with args provided.
If the given name is not registered, an exception is thrown. The args passed must match the method signature of the factory_method.
- register(name, factory_method)¶
Register a factory_method with a name.
The name provided is what clients can use to create instances of that type. factory_method (function): A function that knows how to create an instance of the needed class with the args provided during calls to the create method.
- class Ingestion¶
Bases:
object
- calc_full_storage_name_from_name(store_name)¶
- getEmailDetails(text_splitter)¶
- loadCalendarEventPdf(source, text_splitter)¶
- load_cal_events(directory)¶
- load_emails(directory)¶
- load_text_documents(directory)¶
- parse_eml()¶
- setupHuggingFaceLocal()¶
- setup_agent_llm_chat_gpt(chunk_size=1300, chunk_overlap=0, separator='\n\n', request_timeout=360)¶
- to_text_document()¶
- class NaturalLanguageProcessor(llm, model=None, chat=None, embedding=None, text_splitter=None)¶
Bases:
object
- class NounList(*, items: List[NounRelationship])¶
Bases:
BaseModel
- items: List[NounRelationship]¶
- class NounRelationship(*, relationship_name: str, n1: str, n1_type: str, n2: str, n2_type: str, factoids: List[str])¶
Bases:
BaseModel
- factoids: List[str]¶
- n1: str¶
- n1_type: str¶
- n2: str¶
- n2_type: str¶
- relationship_name: str¶
- class PipelineStage(stage_name)¶
Bases:
object
A collection of pipeline processing methods for a single stage in the ProcessingPipeline.
- add_processor(processor)¶
- class PostgresSemanticNetworkStore(config, user_name)¶
Bases:
BaseSemanticNetworkStore
- add_relationships(relationships, metadata)¶
- add_relationships_v2(relationships, metadata)¶
- query(query)¶
- class ProcessingPipeline(pipeline_id, config)¶
Bases:
object
Manages a collection of PipelineStages that will be used to process input data and generate output data.
Processing pipelines are used when user queries happen and when documents are ingested. This allows custom processing functions to be added into the processing pipeline without having to build classes that inherit from aiAgent classes to override functionality. It also allows different pipelines to be used under different runtime conditions, such as for certain types of users or certain companies hosting the system.
- add_processors(stage_name, processors)¶
Add processors to a specific stage of a specific pipeline.
The processors are stored at the class level and used by instances of ProcessingPipeline to do the actual processing work.
- process(input_data, state={}, start_at_stage=None, stop_after_stage=None)¶
Starts processing the input data by running through the stages of the pipeline, executing all of the processor functions to generate the final output.
Any results from running the pipeline that need to be available to the caller should be put into the pipeline state object. The specific names used in state depend on the pipeline and its processors; for query calls, state should contain a ‘user_response’ property.
- process_single_stage(state, stage, input_data)¶
- class RabbitQueue(queueInfo)¶
Bases:
object
- add_listener(callback, autoAck=False)¶
- listen_forever()¶
- push(body)¶
- push_document_chunk(chunk, metadata, user_name)¶
Creates a queue message for the chunk for a worker to process.
Params: chunk (string): The text of the chunk, NOT a Document object.
- stop_listening()¶
- successfully_processed(msgKey)¶
- class SemanticNetwork(agent_llm, user=None, options=None)¶
Bases:
object
- add_content(document, metadata)¶
- query(input, responseFormat='text')¶
- class SemanticNetworkQueue(llm, user=None, options=None)¶
Bases:
object
- add_content(chunks, metadatas)¶
- add_content_object(document)¶
- listen_to_queue_until_stopped()¶
- query(input, responseFormat='text')¶
- stop_listening_to_queue()¶
- define_factory(factory_id)¶
- get_agent_store_factory()¶
- get_factory(factory_id)¶
- get_smnet_store_factory()¶