VENTUREBEAT.COM
Is that really your boss calling? Jericho Security raises $15M to stop deepfake fraud that’s cost businesses $200M in 2025 alone

Pentagon-backed Jericho Security raises $15 million to combat deepfake fraud that has already cost North American businesses $200 million in 2025, using AI to detect increasingly convincing voice and video impersonations.
-
TOWARDSDATASCIENCE.COM
Government Funding Graph RAG

In this article, I present my latest open-source project — Government Funding Graph. The inspiration for this project came from a desire to make better tooling for grant writing, namely to suggest research topics, funding bodies, research institutions, and researchers. I have made Innovate UK grant applications in the past, so I have had an interest in the government funding landscape for some time. Concretely, a lot of the recent political discourse focuses on government spending, namely Elon Musk’s Department of Government Efficiency (DOGE) in the United States and similar sentiments echoed here in the UK, as Keir Starmer looks to integrate AI into government. Perhaps the release of this project is quite timely. Although it was not the original intention, I hope a secondary outcome of this article is that it inspires more exploration of open-source datasets for public spending.

Government Funding Graph (Image by author)

I have used NetworkX and PyVis to visualise the graph of UKRI API data. Then, I detail a LlamaIndex graph RAG implementation. For completeness, I have also included my initial LangChain-based solution. The web framework is Streamlit, and the demo is hosted on Streamlit Community Cloud.

This article contains the following sections:

1. Definitions
2. UKRI API
3. Construct NetworkX Graph
4. Filter a NetworkX Graph
5. Graph Visualisation Using PyVis
6. Graph RAG Using LlamaIndex
7. Linting With Pylint
8. Streamlit Community Cloud Demo App (at the very end of the article)

1. Definitions

What is UKRI?

UK Research and Innovation is a non-departmental public body sponsored by the Department for Science, Innovation and Technology (DSIT) that allocates funding for research and development. Generally, funding is awarded to research institutions and businesses.

“We invest £8 billion of taxpayers’ money each year into research and innovation and the people who make it happen. We work across a huge range of fields — from biodiversity conservation to quantum computing, and from space telescopes to innovative health care. We give everyone the opportunity to contribute and to benefit, bringing together people and organisations nationally and globally to create, develop and deploy new ideas and technologies.” — UKRI Website

What is a Graph?

A graph is a convenient data structure for showing the relationships between different entities (nodes) and their relationships to each other (edges). In some instances, we also associate those relationships with a numerical value.

“In computer science, a graph is an abstract data type that is meant to implement the undirected graph and directed graph concepts from the field of graph theory within mathematics. A graph data structure consists of a finite (and possibly mutable) set of vertices (also called nodes or points), together with a set of unordered pairs of these vertices for an undirected graph or a set of ordered pairs for a directed graph. These pairs are known as edges (also called links or lines), and for a directed graph are also known as edges but also sometimes arrows or arcs.” — Wikipedia

Government Funding Graph (Image By Author)

What is NetworkX?

NetworkX is a useful library in this project to construct and store our graph. Specifically, a digraph, though the library supports many graph variants, such as multigraphs, as well as graph-related utility functions.

“NetworkX is a Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks.” — NetworkX Website
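As a minimal illustration of the node, edge, and attribute concepts above (a toy sketch, not part of the project code; the names are invented):

```python
import networkx as nx

# A directed graph: edges point from funder to project.
graph = nx.DiGraph()

# Nodes can carry arbitrary attributes (here, a group and a display size).
graph.add_node("UKRI", group="funder_name", size=100)
graph.add_node("Example Project", group="project_title", size=25)

# Edges can carry attributes too, e.g. the funded value in pounds.
graph.add_edge("UKRI", "Example Project", value=1_000_000)

print(graph.number_of_nodes())       # 2
print(graph.number_of_edges())       # 1
print(dict(graph.nodes(data=True)))  # node -> attribute dict
```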
What is PyVis?

We use the PyVis Python package to create dynamic network views for our graph; screenshots of these can be found throughout the article.

“The pyvis library is meant for quick generation of visual network graphs with minimal python code. It is designed as a wrapper around the popular Javascript visJS library” — PyVis Docs

What is LlamaIndex?

LlamaIndex is a popular library for LLM applications, including support for agentic workflows; we use it to perform the graph RAG component of this project.

“LlamaIndex (GPT Index) is a data framework for your LLM application. Building with LlamaIndex typically involves working with LlamaIndex core and a chosen set of integrations (or plugins).” — LlamaIndex Github

What is Graph RAG?

Graph RAG at a high level (Image By Author)

Retrieval-augmented generation, or RAG as it is commonly known, is an AI framework in which additional context from an external knowledge base is used to ground LLM answers. Graph RAG, by extension, pertains to the use of a graph to provide this additional context.

“GraphRAG is a powerful retrieval mechanism that improves GenAI applications by taking advantage of the rich context in graph data structures… Basic RAG systems rely solely on semantic search in vector databases to retrieve and rank sets of isolated text fragments. While this approach can surface some relevant information, it fails to capture the context connecting these pieces. For this reason, basic RAG systems are ill-equipped to answer complex, multi-hop questions. This is where GraphRAG comes in. It uses knowledge graphs to represent and connect information to capture not only more data points but also their relationships. Thus, graph-based retrievers can provide more accurate and relevant results by uncovering hidden connections that aren’t often obvious but are crucial for correlating information.” — Neo4j Website

What is Streamlit?

Streamlit is a lightweight Python web framework we will use to create the web application for this project.

“Streamlit is an open-source Python framework for data scientists and AI/ML engineers to deliver dynamic data apps with only a few lines of code. Build and deploy powerful data apps in minutes.” — Streamlit website

2. UKRI API

The UKRI API is a service that facilitates access to the public UKRI grant funding dataset. Authentication is not required, and the docs can be found here. I use only two endpoints for our application: the Search projects endpoint and the Projects endpoint. This allows a user to search for projects based on a keyword search and retrieve all project-specific information.

A search term, page size, and page number are provided as query string parameters. The query string parameters selectedSortableField=pro.am&selectedSortOrder=DESC ensure that the results are returned by funded value, descending. I have also included the code I used for asynchronous pagination.

```python
import math
import requests
import concurrent.futures
import os
from itertools import chain
import urllib.parse
import logging


def search_ukri_projects(args):
    """
    Search UKRI projects based on a search term, page size and page number.
    More details can be found here: https://gtr.ukri.org/resources/api.html
    """
    search_term, page_size, page_number = args
    try:
        encoded_search_term = urllib.parse.quote(search_term)
        if (
            (
                response := requests.get(
                    f"https://gtr.ukri.org/api/search/project?term={encoded_search_term}&page={page_number}&fetchSize={page_size}&selectedSortableField=pro.am&selectedSortOrder=DESC&selectedFacets=&fields=project.abs",
                    timeout=10,
                )
            )
            and (response.status_code == 200)
            and (
                items := response.json()
                .get("facetedSearchResultBean", {})
                .get("results")
            )
        ):
            return items
    except Exception as error:
        logging.exception("ERROR search_ukri_projects: %s", error)
    return []


def search_ukri_paginate(search_term, number_of_results, page_size=100):
    """
    Asynchronous pagination requests for project lookup.
    """
    args = [
        (search_term, page_size, page_number + 1)
        for page_number in range(int(math.ceil(number_of_results / page_size)))
    ]
    with concurrent.futures.ThreadPoolExecutor(os.cpu_count()) as executor:
        future = executor.map(search_ukri_projects, args)
        results = [result for result in future if result]
    return list(chain.from_iterable(results))[:number_of_results]
```

The following function is used to get project-specific data using the unique UKRI project reference. The project reference is derived from the aforementioned project search results.

```python
import requests
import logging


def get_ukri_project_data(project_grant_reference):
    """
    Search UKRI project data based on grant reference.
    """
    try:
        if (
            (
                response := requests.get(
                    f"https://gtr.ukri.org/api/projects?ref={project_grant_reference}",
                    timeout=10,
                )
            )
            and (response.status_code == 200)
            and (items := response.json().get("projectOverview", {}))
        ):
            return items
    except Exception as error:
        logging.exception("ERROR get_ukri_project_data: %s", error)
```

Similarly, we parse out the relevant data for the construction of the graph and remove superfluous information.

```python
def parse_data(projects):
    """
    Parse project data into a usable format and validate.
    """
    data = []
    for project in projects:
        project_composition = project.get("projectComposition", {})
        project_data = project_composition.get("project", {})
        fund = project_data.get("fund", {})
        funder = fund.get("funder")
        value_pounds = fund.get("valuePounds")
        lead_research_organisation = project_composition.get("leadResearchOrganisation")
        person_roles = project_composition.get("personRoles")

        if all(
            [
                project_composition,
                project_data,
                fund,
                funder,
                value_pounds,
                lead_research_organisation,
            ]
        ):
            record = {}
            record["funder_name"] = funder.get("name")
            record["funder_link"] = funder.get("resourceUrl")
            record["project_title"] = project_data.get("title")
            record["project_grant_reference"] = project_data.get("grantReference")
            record["value"] = value_pounds
            record["lead_research_organisation"] = lead_research_organisation.get(
                "name", ""
            )
            record["lead_research_organisation_link"] = lead_research_organisation.get(
                "resourceUrl", ""
            )
            record["people"] = person_roles
            record["project_url"] = project_data.get("resourceUrl")
            data.append(record)
    return data
```
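The three functions compose into an end-to-end lookup. Below is a sketch of that pipeline under one assumption: the key used to pull the grant reference out of a raw search result ("grantReference" here) is illustrative, so check the actual field name in the API response.

```python
# Hypothetical pipeline: search, fetch per-project data, then parse.
results = search_ukri_paginate("quantum computing", number_of_results=50)

projects = []
for item in results:
    if grant_reference := item.get("grantReference"):  # assumed key name
        if project := get_ukri_project_data(grant_reference):
            projects.append(project)

data = parse_data(projects)
print(f"Parsed {len(data)} project records")
```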
3. Construct NetworkX Graph

There are different types of graphs, and I elected for a directed graph, where the direction of the edges is important. More formally:

“A DiGraph stores nodes and edges with optional data, or attributes. DiGraphs hold directed edges. Self loops are allowed but multiple (parallel) edges are not.” — NetworkX Website

To construct the NetworkX graph, we must add nodes and edges, including the sequential updating of node attributes. The standard attributes compatible with PyVis graph rendering for nodes are as follows:

- Title (the label that appears on hover)
- Group (the colour coding)
- Size (how large the nodes appear in the graph)

We also use the custom attribute “funding”, which we will use to sum all of the funding for research and funding organizations. This will be normalized to set the node size according to the percentage of total funding for a particular group.

For our graph, we have nodes from four groups. They are classified as: funder_name, lead_research_organisation, project_title and person_name.

HTML links can be used in the node title to allow the user to easily click through to a URL. I have included a helper function to do this below. There are project, people, and research organisation-specific links that, if followed, provide additional information to the user.

Government Funding Graph (Image By Author)

The code to construct the NetworkX graph can be seen below. The DiGraph class has methods to check if a graph already has a node, and similarly for edges, as well as methods for adding nodes and edges. As we iterate through projects, we want to sum the total funding amount for the funding organization and lead research institution. There are methods to both get an attribute from a node in the graph and set an attribute on a node. Depending on the source and destination node, we also apply different titles and labels to reflect that specific predicate. These can be seen in the code below.

```python
import networkx as nx


def get_link_html(link, text):
    """
    Helper function to construct a HTML link.
    """
    return f"""<a href="{link}" target="_blank">{text}</a>"""


def set_networkx_attribute(graph, node_label, attribute_name, value):
    """
    Helper to set attribute for networkx graph.
    """
    attrs = {node_label: {attribute_name: value}}
    nx.set_node_attributes(graph, attrs)


def append_networkx_value(graph, node_label, attribute_name, value):
    """
    Helper to append value to current node attribute scalar value.
    """
    current_map = nx.get_node_attributes(graph, attribute_name, default=0)
    current_value = current_map[node_label]
    current_value = current_value + value
    set_networkx_attribute(graph, node_label, attribute_name, current_value)


def create_networkx(data):
    """
    Create networkx graph from UKRI data.
    """
    graph = nx.DiGraph()
    for row in data:
        if (
            (funder_name := row.get("funder_name"))
            and (project_title := row.get("project_title"))
            and (lead_research_organisation := row.get("lead_research_organisation"))
        ):
            project_data_lookup = row.get("project_data_lookup", {})

            if not graph.has_node(funder_name):
                graph.add_node(
                    funder_name, title=funder_name, group="funder_name", size=100
                )

            if not graph.has_node(project_title):
                link_html = get_link_html(
                    row.get("project_url", "").replace("api/", ""), project_title
                )
                graph.add_node(
                    project_title,
                    title=link_html,
                    group="project_title",
                    project_data_lookup=project_data_lookup,
                    size=25,
                )

            if not graph.has_edge(funder_name, project_title):
                graph.add_edge(
                    funder_name,
                    project_title,
                    value=row.get("value"),
                    title=f"{'£{:,.2f}'.format(row.get('value'))}",
                    label=f"{'£{:,.2f}'.format(row.get('value'))}",
                )

            if not graph.has_node(lead_research_organisation):
                link_html = get_link_html(
                    row.get("lead_research_organisation_link").replace("api/", ""),
                    lead_research_organisation,
                )
                graph.add_node(
                    lead_research_organisation,
                    title=link_html,
                    group="lead_research_organisation",
                    size=50,
                )

            if not graph.has_edge(lead_research_organisation, project_title):
                graph.add_edge(
                    lead_research_organisation, project_title, title="RELATES TO"
                )

            append_networkx_value(graph, funder_name, "funding", row.get("value", 0))
            append_networkx_value(graph, project_title, "funding", row.get("value", 0))
            append_networkx_value(
                graph, lead_research_organisation, "funding", row.get("value", 0)
            )

            person_roles = row.get("people", [])
            for person in person_roles:
                if (
                    (person_name := person.get("fullName"))
                    and (person_link := person.get("resourceUrl"))
                    and (project_title := row.get("project_title"))
                    and (roles := person.get("roles"))
                ):
                    if not graph.has_node(person_name):
                        link_html = get_link_html(
                            person_link.replace("api/", ""), person_name
                        )
                        graph.add_node(
                            person_name, title=link_html, group="person_name", size=10
                        )
                    for role in roles:
                        if (not graph.has_edge(person_name, project_title)) or (
                            not graph[person_name][project_title]["title"]
                            == role.get("name")
                        ):
                            graph.add_edge(
                                person_name,
                                project_title,
                                title=role.get("name"),
                                label=role.get("name"),
                            )
    return graph
```

Once the graph has been constructed, and as previously described, I wanted to normalize the node sizes depending on the percentage of the total amount of funding for particular groups. I also append the total funding, both as a summation and as a percentage, to the node label so it can be more easily viewed by a user. The scale factor is just a multiple applied for aesthetic reasons, such that the node sizes appear relative to the other node groups present.

```python
import networkx as nx
import math
import utils.config as config  # pylint: disable=consider-using-from-import, import-error


def set_networkx_attribute(graph, node_label, attribute_name, value):
    """
    Helper to set attribute for networkx graph.
    """
    attrs = {node_label: {attribute_name: value}}
    nx.set_node_attributes(graph, attrs)


def calculate_total_funding_from_group(graph, group):
    """
    Helper to calculate total funding for a group.
    """
    return sum(
        [
            data.get("funding")
            for node_label, data in graph.nodes(data=True)
            if data.get("funding") and data.get("group") == group
        ]
    )


def set_weighted_size_helper(graph, node_label, totals, data):
    """
    Create normalized weights based on percentage funding amount.
    """
    if (
        (group := data.get("group"))
        and (total_funding := totals.get(group))
        and (funding := data.get("funding"))
    ):
        div = funding / total_funding
        funding_percentage = math.ceil(100.0 * div)
        set_networkx_attribute(graph, node_label, "size", funding_percentage)


def annotate_value_on_graph(graph):
    """
    Calculate normalized graph sizes and append to title.
    """
    totals = {}
    for group in ["lead_research_organisation", "funder_name"]:
        totals[group] = calculate_total_funding_from_group(graph, group)

    for node_label, data in graph.nodes(data=True):
        if (
            (funding := data.get("funding"))
            and (group := data.get("group"))
            and (title := data.get("title"))
        ):
            new_title = f"{title} | {'£ {:,.0f}'.format(funding)}"
            if total_funding := totals.get(group):
                div = funding / total_funding
                funding_percentage = math.ceil(100.0 * div)
                set_networkx_attribute(
                    graph,
                    node_label,
                    "size",
                    config.NODE_SIZE_SCALE_FACTOR * funding_percentage,
                )
                new_title += f" | {' {:,.0f}'.format(funding_percentage)} %"
            set_networkx_attribute(graph, node_label, "title", new_title)
```
""" if ( (group := data.get("group")) and (total_funding := totals.get(group)) and (funding := data.get("funding")) ): div = funding / total_funding funding_percentage = math.ceil(((100.0 * div))) set_networkx_attribute(graph, node_label, "size", funding_percentage) def annotate_value_on_graph(graph): """ Calculate normalized graph sizes and append to title. """ totals = {} for group in ["lead_research_organisation", "funder_name"]: totals[group] = calculate_total_funding_from_group(graph, group) for node_label, data in graph.nodes(data=True): if ( (funding := data.get("funding")) and (group := data.get("group")) and (title := data.get("title")) ): new_title = f"{title} | {'£ {:,.0f}'.format(funding)}" if total_funding := totals.get(group): div = funding / total_funding funding_percentage = math.ceil(((100.0 * div))) set_networkx_attribute( graph, node_label, "size", config.NODE_SIZE_SCALE_FACTOR * funding_percentage, ) new_title += f" | {' {:,.0f}'.format(funding_percentage)} %" set_networkx_attribute(graph, node_label, "title", new_title) 4. Filter a NetworkX Graph Government Funding Graph UI (Image By Author) I allow the user to filter nodes via the UI to create a subgraph. The form to do this in Streamlit is below. I also find the neighbors of neighbors for the filtered nodes. I had some issues with Pylint raising unnecessary comprehension errors from the generator, which I have disabled — more on Pylint later in the article. A smaller graph will take less time to render and will ensure that irrelevant context will be excluded. import networkx as nx import streamlit as st def find_neighbor_nodes_helper(node_list, graph): """ Find unique node neighbors and flatten. """ successors_generator_array = [ # pylint: disable=unnecessary-comprehension [item for item in graph.successors(node)] for node in node_list ] predecessors_generator_array = [ # pylint: disable=unnecessary-comprehension [item for item in graph.predecessors(node)] for node in node_list ] neighbors = successors_generator_array + predecessors_generator_array flat = sum(neighbors, []) return list(set(flat)) def render_filter_form(annotated_node_data, graph): """ Render form to allow the user to define search nodes. """ st.session_state["filter"] = st.radio( "Filter", ["No filter", "Filter results"], index=0, horizontal=True ) if (filter_determinant := st.session_state.get("filter")) and ( filter_determinant == "Filter results" ): st.session_state["node_group"] = st.selectbox( "Entity type", list(annotated_node_data.keys()) ) if node_group := st.session_state.get("node_group"): ordered_lookup = dict( sorted( annotated_node_data[node_group].items(), key=lambda item: item[1].get("neighbor_len"), reverse=True, ) ) st.session_state["search_nodes_label"] = st.multiselect( "Filter projects", list(ordered_lookup.keys()) ) if search_nodes_label := st.session_state.get("search_nodes_label"): filter_nodes = [ ordered_lookup[label].get("label") for label in search_nodes_label ] search_nodes_neighbors = find_neighbor_nodes_helper(filter_nodes, graph) search_nodes = find_neighbor_nodes_helper(search_nodes_neighbors, graph) st.session_state["search_nodes"] = list( set(search_nodes + filter_nodes + search_nodes_neighbors) ) NetworkX makes it easy to create a subgraph from a list of nodes with the subgraph_view function, which takes a callable as a parameter. The callable takes a graph node as a parameter and if the boolean True value is returned, the node would be included in the subgraph. 
```python
import networkx as nx
import streamlit as st


def filter_node(node):
    """
    Check to see if the filter term is in the nodes selected.
    """
    if (
        (filter_term := st.session_state.get("filter"))
        and (filter_term == "Filter results")
        and (search_nodes := st.session_state.get("search_nodes"))
    ):
        if node not in search_nodes:
            return False
    return True


graph = nx.subgraph_view(graph, filter_node=filter_node)
```

5. Graph Visualisation Using PyVis

To produce the visualizations presented earlier in the article, we must first convert the NetworkX graph to a PyVis network and then render the HTML file within the Streamlit UI. If you are unfamiliar with Streamlit, you can see one of my other articles that explores the topic here.

Converting a NetworkX graph to PyVis format is relatively trivial and can be achieved with the code below. The Network class is the main class for visualization functionality. First, we instantiate the class; in this example, the graph is directed. The barnes_hut method, a gravity model, is then called. The from_nx method takes an existing NetworkX graph as an argument and translates it to PyVis; it is called in place.

```python
from pyvis.network import Network


def convert_graph(graph):
    """
    Convert networkx to pyvis graph.
    """
    net = Network(
        height="700px",
        width="100%",
        bgcolor="#222222",
        font_color="white",
        directed=True,
    )
    net.barnes_hut()
    net.from_nx(graph)
    return net
```

To render the graph to the UI, we first create a unique identifier (UUID), as we use the PyVis save_graph method to save the HTML file for the graph on the server. The UUID ensures a unique file name, which is then read into the Streamlit UI, after which the file is deleted.

```python
import uuid
import contextlib
import os
import streamlit as st


def render_graphs(net):
    """
    Helper to render graph visualization from pyvis graph.
    """
    uuid4 = uuid.uuid4()
    file_name = f"./output/{uuid4}.html"
    with contextlib.suppress(FileNotFoundError):
        os.remove(file_name)
    net.save_graph(file_name)
    with open(file_name, "r", encoding="utf-8") as html_file:
        source_code = html_file.read()
    st.components.v1.html(source_code, height=650, width=650)
    os.remove(file_name)
```
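The two helpers are then wired together in the app; a minimal sketch, assuming graph is the (optionally filtered) NetworkX graph from the previous section:

```python
import streamlit as st

st.title("Government Funding Graph")  # illustrative page title

# Convert the NetworkX graph to PyVis and embed the rendered HTML.
net = convert_graph(graph)
render_graphs(net)
```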
6. Graph RAG Using LlamaIndex

Government Funding Graph RAG (Image By Author)

Through graph retrieval-augmented generation, we can query our graph data directly; an example can be seen in the prior screenshot. Extracted entities from the user query are looked up in the graph to give specific context to the AI to ground its response, as this information would likely not have been in the training corpus, and hence any answer given would otherwise have an increased likelihood of being a hallucination.

We create a chat engine to pass a user’s previous query history into the model. Usually, the OpenAI API key is read as an environment variable within LlamaIndex; however, since the key is user-submitted in our application and we don’t want to save users’ OpenAI credentials, we need to pass credentials to the LLM and embedding model classes as keyword arguments.

We then create an empty LlamaIndex Knowledge Graph Index and populate the knowledge graph by inserting triples. The triples come from traversing the edges of our NetworkX graph and calling the upsert_triplet_and_node method, which will create the triple and node if they don’t already exist. Since the graph is directed, we can interchange the subjects and objects so that the graph is traversable in either direction.

The chat engine uses the tree_summarize option for the response builder.

“Tree summarize response builder. This response builder recursively merges text chunks and summarizes them in a bottom-up fashion (i.e. building a tree from leaves to root). More concretely, at each recursively step: 1. we repack the text chunks so that each chunk fills the context window of the LLM 2. if there is only one chunk, we give the final response 3. otherwise, we summarize each chunk and recursively summarize the summaries.” — LlamaIndex Website

Calling the chat method with the user’s query and constructing the chat history from the Streamlit state object is included here.

```python
from llama_index.core import KnowledgeGraphIndex
from llama_index.core.schema import TextNode
from llama_index.embeddings.openai import OpenAIEmbedding
from llama_index.llms.openai import OpenAI
from llama_index.core.llms import ChatMessage, MessageRole
import streamlit as st
import utils.ui_utils as ui_utils  # pylint: disable=consider-using-from-import, import-error


def init_llama_index_graph(graph_nx, open_ai_api_key):
    """
    Construct a knowledge graph using llama index.
    """
    llm = OpenAI(model="gpt-3.5-turbo", api_key=open_ai_api_key)
    embed_model = OpenAIEmbedding(api_key=open_ai_api_key)

    graph = KnowledgeGraphIndex(
        [], llm=llm, embed_model=embed_model, api_key=open_ai_api_key
    )

    for subject_entity, object_entity in graph_nx.edges():
        predicate = graph_nx[subject_entity][object_entity].get("label", "relates to")
        graph.upsert_triplet_and_node(
            (subject_entity, predicate, object_entity), TextNode(text=subject_entity)
        )
        graph.upsert_triplet_and_node(
            (object_entity, predicate, subject_entity), TextNode(text=subject_entity)
        )

    chat_engine = graph.as_chat_engine(
        include_text=True,
        response_mode="tree_summarize",
        embedding_mode="hybrid",
        similarity_top_k=5,
        verbose=True,
        llm=llm,
    )
    return chat_engine


def add_result_to_state(question, response):
    """
    Add model output to state.
    """
    if response:
        graph_answers = st.session_state.get("graph_answers") or []
        graph_answers.append((question, response))
        st.session_state["graph_answers"] = graph_answers
    else:
        st.error("Query failed, please try again later.", icon="")


def query_llama_index_graph(query_engine, question):
    """
    Query llama index knowledge graph using graph RAG.
    """
    graph_answers = st.session_state.get("graph_answers", [])
    chat_history = []
    for query, answer in graph_answers:
        chat_history.append(ChatMessage(role=MessageRole.USER, content=query))
        chat_history.append(ChatMessage(role=MessageRole.ASSISTANT, content=answer))

    if response := query_engine.chat(question, chat_history):
        add_result_to_state(question, response.response)
```
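A usage sketch for the two functions above; the question string is illustrative, and graph is the NetworkX graph built in section 3:

```python
# Build the chat engine once, then answer a question with graph RAG.
# Question/answer pairs accumulate in Streamlit session state.
chat_engine = init_llama_index_graph(graph, open_ai_api_key)
query_llama_index_graph(chat_engine, "Who funds quantum computing research?")

for question, answer in st.session_state.get("graph_answers", []):
    st.write(f"Q: {question}")
    st.write(f"A: {answer}")
```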
""" if response: graph_answers = st.session_state.get("graph_answers") or [] graph_answers.append((question, response)) st.session_state["graph_answers"] = graph_answers else: st.error("Query failed, please try again later.", icon="") def construct_graph_langchain(graph_nx, open_ai_api_key, question): """ Construct a knowledge graph in Langchain and preform graph RAG. """ graph = NetworkxEntityGraph() for node in graph_nx: graph.add_node(node) for subject_entity, object_entity in graph_nx.edges(): predicate = graph_nx[subject_entity][object_entity].get("label", "relates to") graph.add_triple(KnowledgeTriple(subject_entity, predicate, object_entity)) llm = ChatOpenAI( api_key=open_ai_api_key, model="gpt-4", temperature=0, max_retries=2 ) chain = GraphQAChain.from_llm(llm=llm, graph=graph, verbose=True) if response := chain.invoke({"query": question}): answer = response.get("result") add_result_to_state(question, answer) 7. Linting With Pylint Government Funding Graph (Image By Author) Since I have left some comments in the code to disable the linter in the examples above (examples are referenced from the GitHub repo), I thought I’d cover the topic of linting briefly. For those unfamiliar, linting helps to check your code for potential bugs and stylistic issues. Linters automatically enforce coding standards. To get started, install Pylint by running the command. pip install pylint Secondly, we need to create a .pylintrc file at the root of the project (we can also set default global and user-specific settings depending on where we create the .pylintrc file). To do this, you will need to run. pylint --generate-rcfile > .pylintrc We can configure this file to fit our preferences by updating the default values within the .pylintrc file. To run the linter manually, you can use. pylint ./main.py && pylint ./**/*.py When the Docker image is built, it will automatically run Pylint and raise an error should it detect an issue with the code. This can be seen in the Dockerfile. FROM python:3.10.16 AS base WORKDIR /app COPY requirements.txt . RUN pip install --upgrade pip RUN pip install -r requirements.txt COPY . . RUN mkdir -p /app/output RUN pylint ./main.py && pylint ./**/*.py RUN python -m unittest -v tests.test_ukri_utils.Testing CMD ["streamlit", "run", "./main.py"] A popular formatter that you might also find useful is Black — “Black is a PEP 8 compliant opinionated formatter. Black reformats entire files in place.” Running Black will automatically resolve some of the issues that would be raised by the linter. 8. Streamlit Community Cloud Demo App With Streamlit Community Cloud, anyone can host their application for free. If you have an application you’d like to deploy, you can follow this tutorial. To see the hosted demo, please click the link below. https://governmentfundinggraph.streamlit.app Thanks for reading my article — as promised, you can find all the code in the GitHub repo here. Any and all feedback is valuable to me as it provides direction for my future projects. If you found this article useful, please let me know. You can also find me over on LinkedIn if you have specific questions. Interested in open-source AI grant writing projects? Sign up for our mailing list here. *All images, unless otherwise noted, are by the author. 
-
WWW.GAMESPOT.COM
EA Sports College Football 26 Preorders Up For Grabs For PS5 And Xbox

EA Sports College Football 26 releases for Xbox Series X and PS5 on July 10, though it'll be noticeably absent on Switch 2 and PC (once again). Preorders are starting to open for this year’s installment of the revived franchise, and you’ll unlock plenty of bonuses if you reserve a copy in advance. We’re still waiting on full details of what’s changing for College Football 26, but here’s a look at all preorder bonuses and all versions of the game up for grabs.

EA Sports College Football 26 will once again release about a month before Madden. Preorders for Madden NFL 26 opened at the same time as College Football 26, with the pro football sim slated to launch on August 14.
-
GAMERANT.COM
GameStop Has Good News for Fans That Missed Getting a Switch 2 Pre-Order

Video game retailer GameStop has said that it is canceling bot and duplicate orders of the Nintendo Switch 2. Nintendo Switch 2 pre-orders went live in the US on April 24 and caused a great deal of chaos online. As expected, millions of people rushed to their online retailer of choice in an attempt to secure a Switch 2 pre-order of their own, with varying degrees of success.
-
WWW.POLYGON.COM
EA’s terrific draft-day ad has me pumped for College Football 26 and Madden NFL 26

Are you ready for some football? There’s never really an offseason for America’s most popular sport, but the biggest event on the offseason calendar — the NFL draft — begins Thursday night, and Electronic Arts has marked the occasion with a great draft-themed ad for EA Sports College Football 26 and Madden NFL 26, alongside the confirmation of the games’ respective release dates this summer.

EA’s two-and-a-half-minute commercial “The Call” — which will air during coverage of the draft on ABC, ESPN, and NFL Network — focuses on a fictional wide receiver named J.D. Matthews Jr. We see everything that goes through his mind when Washington Commanders head coach Dan Quinn calls him to let him know that the team is taking him with the 29th overall pick in the draft. And as Quinn recounts the chip-on-his-shoulder story of Matthews Jr.’s football career through high school and college, we see the emotion play out on the Texas Longhorns star’s face.

The draft is the great crossroads between college football and the NFL, the place where NCAA standouts hope to realize their lifelong dream of becoming professional athletes. For those lucky few who get that life-changing phone call on draft day, it’s the culmination of so much hard work — and, of course, a new beginning, with all kinds of possibilities and pitfalls ahead. “The work’s just starting,” Quinn reminds his new rookie wideout.

EA’s pitch here is that football fans can experience the full spectrum of this journey by playing both EA Sports College Football 26 and Madden NFL 26. The publisher announced Thursday that this year’s iteration of the franchise formerly known as NCAA Football will be released July 10 on PlayStation 5 and Xbox Series X. It will be followed on Aug. 14 by Madden, which will be available on Nintendo Switch 2 and Windows PC as well as PS5 and Xbox Series X. That’s right: EA is now leaving last-generation consoles behind, making 2024’s Madden NFL 25 the final entry in the series to be released on PlayStation 4 and Xbox One.

Speaking of older consoles, Madden NFL 26’s Switch 2 version will end a nearly 13-year drought for Madden games on Nintendo platforms — the last one was a scaled-back version of Madden NFL 13, which was a Wii U launch title back in November 2012. EA has not yet released any real information about the modes and features of this year’s Madden game; we’ve asked the company about whether there will be feature parity between the Switch 2 version and the PC/PS5/Xbox Series X versions, and we’ll update this article when we receive a response.

Pre-orders are now available for College Football 26 and Madden NFL 26 on all platforms. (While you can’t get the Switch 2 version through EA’s website, Amazon is taking pre-orders for a game-key card “physical” copy of it.) Just like last year, people who know they want to play both titles can get them in a discounted package known as the MVP Bundle. This set, which is only available digitally and only on PS5 and Xbox Series X, packs together the deluxe editions of the college and pro games for $149.99 — a savings of $50 over buying them separately for $99.99 a pop. Along with a raft of in-game bonuses for modes such as Ultimate Team, Road to Glory, Dynasty, Franchise, and Superstar, each Deluxe Edition comes with three days’ worth of early access to the game in question.
(You can see the full list of pre-order goodies on the College Football and Madden websites.)
-
DESIGN-MILK.COM
Tuscan Colors Transform Ontario Home by Studio Brocca

The green marble arrived before the foundation. This single fact reveals the priorities that would shape this Ontario home, designed by Studio Brocca, where Italian heritage and contemporary design sensibilities converge with thoughtful clarity. Standing on land once occupied by a 1940s property, the new residence serves as a prelude to the French chateau visible beyond, yet its soul speaks unmistakably of Tuscany.

“The green marble is the first item we selected for the home, before we broke ground – and we did anything to make it happen,” Samantha Brocca says. The homeowners continue: “Greens are not only our favorite color but they associate them with the beauty of the Italian countryside, rolling hills, and deep green cypress. Seeing as the house is set in a matured green space, greens also complement the exterior feel. The green marble was also brought in in the open concept closet in the bedroom, visible from all sides, surrounded by black metalwork to frame the opening and create a feeling of a light partition and interest in materials.”

The 3,500-square-foot residence represents a balancing act between two design languages. Minimal architectural lines provide structure, while curves and arches create counterpoints of softness. This tension between linearity and fluidity manifests throughout the space, from circular light fixtures to the pickets of the staircase. What emerges might best be described as “warm minimalism,” a term Brocca uses to capture the home’s essence. The palette draws directly from the Italian countryside, with deep greens reminiscent of cypress trees and the rust tones of terra cotta.

“We feel that the color palette reminds us of the Tuscan wine country and rolling hills, but the contemporary touches bring the Tuscan palate into modern design,” the family says. “The overall feeling resembles and relates to multiple regions we have been so lucky to enjoy and keeps the gray months of Canada richer with warmth and color. The stacked wood framed openings in the bedroom give the feeling of rows of trees in Tuscan wine country.”

For more information on Studio Brocca, visit studiobrocca.com.

Photography by Lauren Miller.
-
LIFEHACKER.COM
Three Ways People With Student Loan Debt Can Protect Their Credit Scores

The Department of Education announced Monday that Federal Student Aid (FSA) will restart collections on defaulted student loans beginning May 5. Even before this news, millions of borrowers were already seeing their credit scores plunge in recent months, and loan servicers are warning that a record number of borrowers are at risk of defaulting by the end of the year. I recently covered the basics of what you need to know about the upcoming changes, as well as how to prepare for them. Now, let's dive a little deeper into how borrowers suffering through collections can navigate their financial future.

What the end of the pause means for your finances

"Many people have been feeling like they're in some sort of personal financial recession for years now," says Lauren Bringle, an accredited financial counselor at credit-building platform Self Financial. And if you've been carrying credit card debt, you know that those higher interest rates may have caused what you owe to increase significantly. Factor in the cost of many monthly expenses increasing—groceries, gas, eggs—all while salaries have stayed stagnant. "Now add in that student loan payments have resumed, and for some, that means hundreds of dollars in extra expenses monthly," Bringle notes. Especially after the five-year pause on payments that began during the COVID-19 pandemic, many borrowers are having to significantly readjust and re-evaluate their budgets. All of these additional costs have left millions of Americans stretched beyond their means.

Strategies to protect and rebuild your credit

Here's what you can do to navigate a hit to your credit score.

1. Free up money wherever you can

If your income is limited and you simply don't make enough to cover your student loan payments, Bringle suggests an income-driven repayment plan for federal student loans. "The federal student loan landscape has been rapidly changing, but you may be able to qualify for lower monthly payment options (even down to $0/month in some cases) based on your income," advises Bringle. You can learn more and apply at studentaid.gov.

2. Prioritize your credit

"Credit is an essential part of your overall financial profile because it opens the door to long-term financial goals, such as renting an apartment or securing a mortgage," Bringle explains. Missed loan payments can significantly impact your credit because payment history accounts for 35% of your FICO credit score, making on-time payments critical.

If your credit has already taken a hit due to missed student loan payments, consider alternative ways to rebuild. For instance, something as small as implementing this payment schedule helps to send your credit score in the right direction. Additionally, Bringle recommends organizations like Operation Hope, NFCC, and AFCPE, where credit counselors can review your income, expenses, debts, and overall financial picture to help you create a personalized budget and spending plan.

3. Keep building positive money habits

Regardless of where you stand financially, focus on developing positive short-term habits. Especially in the face of something like student debt, it helps to control whatever you can.

Stick to your budget. "As you're setting up your financial goals, make sure you have a really clear view of your overall finances," says Bringle.
"Your budget plays an important role in helping your credit score, because it helps you track your expenses, and ensures that you are able to pay your monthly payments on time, and in full." Here's my guide to evaluating and making strategic cuts to your budget.Make payments on time. Setting up automatic payments can help ensure you don't miss due dates, which would negatively impact your score. "Depending on the conditions of your student loans, borrowers usually have up to six months after graduation before they have to start making payments," Bringle notes. "Be sure to check your loan and know exactly when your first payment is due so you can plan ahead and pay on time, since payment history is a critical piece to building and maintaining healthy credit." To find out exactly how much you’re expected to pay, head to studentaid.gov.Hack your credit utilization. Credit utilization is the second-largest factor of your FICO score, so it's important not to use too much of your available credit. The general rule is to stay below the 30% threshold, but even lower is better. Using more than 30% of your available credit can affect your credit utilization, which could ultimately decrease your score. For example, if your credit limit is $1,000, you should not put more than $300 on your credit card before paying down the balance.Review your credit report regularly. Checking your report gives you a clear understanding of your credit health and what might be impacting your score. You can review payment history, recent balances reported to credit bureaus, accounts under your name, and identify negative items like collections that need to be addressed. Free copies of your credit reports from Experian, Equifax, and TransUnion are available at annualcreditreport.com.Looking aheadBringle emphasizes the importance of preparation: "Make sure your budget is set up to support your payments, start setting aside the payments from your monthly budget to build the habit, and set up autopay if you can to reduce the chances of a late payment."By taking proactive steps now, you can protect and rebuild your credit score as much as possible before student loan collections resume.0 Yorumlar 0 hisse senetleri 26 Views
-
WWW.ENGADGET.COM
Perplexity is building a browser in part to collect customer data for targeted ads

AI company Perplexity announced in February that it was building its own browser called Comet. In a recent interview with the TBPN podcast, CEO Aravind Srinivas gave some insight as to why the business appeared to be branching out from its artificial intelligence focus: it's to collect user data and sell targeted advertisements. "That’s kind of one of the other reasons we wanted to build a browser, is we want to get data even outside the app to better understand you," he said. “We plan to use all the context to build a better user profile and, maybe you know, through our discover feed we could show some ads there.”

If that all sounds familiar, it could be because Google's Chrome browser has taken a similar approach. In fact, Comet is built on Chromium, the open-source browser base from Google. That's not to say Perplexity wouldn't take the chance to go straight to the source and acquire Chrome in the aftermath of Google's recent monopoly court ruling regarding online search. In the ongoing hearings about Google and its potential sale of Chrome, Chief Business Officer Dmitry Shevelenko said he thought Perplexity would be able to continue running the browser at its current scale. Unsurprisingly, he wasn't too keen on OpenAI acquiring the property.