First, download and install Ollama.
Then run the following commands to pull the two models used below:
ollama pull mxbai-embed-large
ollama pull deepseek-r1
Then run the following Python code:
import numpy as np
import ollama
from langchain_community.vectorstores import FAISS

# Step 1: Generate the query embedding with Ollama
model = 'mxbai-embed-large'
prompt = '自然语言'

# ollama.embeddings returns a dict; the vector is under the 'embedding' key
query_embedding = ollama.embeddings(model=model, prompt=prompt)
query_embedding = np.array(query_embedding['embedding'])

# Step 2: Create a FAISS vector store from some example sentences
# (replace these with your own data)
sentences = [
    "自处理人类语言。NLP涵盖了从文本分析、机器翻译到情感分析等多个任务。在文本分析中,NLP通过词汇分析、句法分析和语义理解等技术,将非结构化的自然语言转换为机器可以处理的形式。机器翻译则利用深度学习模型,如Transformer架构,实现不同语言之间的自动转换。情感分析则通过分析文本中的情感词汇和句子结构,判断文本的情感倾向。这些技术不仅提高了人机交互的效率,还在医疗、金融等多个领域发挥了重要作用。深度学习:推动NLP发展的核心技术然语言处理:人工智能的关键领域",
    "Rayleigh scattering causes the blue color of the sky.",
    "The ocean is blue because it reflects the sky.",
    "Rainbows are formed by the refraction of light.",
    "Rayleigh scattering is more effective for shorter wavelengths like blue."
]

# Generate embeddings for the example sentences
sentence_embeddings = []
for sentence in sentences:
    embedding = ollama.embeddings(model=model, prompt=sentence)
    sentence_embeddings.append(np.array(embedding['embedding']))

# Build the FAISS index from the precomputed (sentence, embedding) pairs.
# The 'embedding' argument is only a placeholder: the search below queries
# the underlying FAISS index directly with a precomputed query vector,
# so this callable is never actually used.
dimension = len(query_embedding)
faiss_index = FAISS.from_embeddings(
    list(zip(sentences, sentence_embeddings)),
    embedding=lambda text: [0.0] * dimension  # dummy embedding callable
)

# Step 3: Approximate nearest neighbor search for the top 3 matches.
# FAISS expects float32 vectors.
distances, indices = faiss_index.index.search(
    np.array([query_embedding], dtype=np.float32), k=3
)

# Display the results
print("Top matching sentences:")
for i, idx in enumerate(indices[0]):
    print(f"{i + 1}: {sentences[idx]} (Distance: {distances[0][i]})")

Finally, send the question together with the top matches to the deepseek model running in Ollama and ask it to summarize and produce the final answer.
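The post does not show that last call. A minimal sketch, assuming the script above has already run and using the ollama.chat API from the same ollama package; the prompt wording and the top_sentences name are illustrative, not part of the original post:

# Sketch only: summarize the retrieved sentences with deepseek-r1 via Ollama.
# `sentences`, `indices` and `prompt` come from the script above; the prompt
# template below is an illustrative assumption, not from the original post.
top_sentences = [sentences[idx] for idx in indices[0]]

response = ollama.chat(
    model='deepseek-r1',
    messages=[{
        'role': 'user',
        'content': (
            f"Question: {prompt}\n"
            "Context:\n" + "\n".join(top_sentences) + "\n"
            "Summarize the context and answer the question."
        ),
    }],
)

print(response['message']['content'])

Note that deepseek-r1 usually prefixes its reply with a <think>...</think> reasoning block, which you may want to strip before displaying the answer.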