What if we could visualize how training data gets poisoned using networks? As machine learning models become increasingly integral to software applications, they face growing risks from adversarial attacks, most notably data poisoning. Data poisoning attacks intentionally inject, modify, or delete records in a training dataset, undermining the reliability and accuracy of the resulting models, including generative AI systems. This presentation demonstrates a novel approach to understanding and identifying data poisoning through visualization techniques in Gephi, an open-source network analysis tool. By leveraging Gephi's advanced visualization capabilities, we will map how poisoned data influences the structure and behavior of neural networks, highlighting anomalous patterns indicative of poisoning. Through two case studies, we will visualize irregularities, such as anomalous clustering and unexpected node behavior, that are common indicators of data manipulation. The talk underscores the value of network visualization in threat detection, offering hackers a new perspective on the intersection of network science and machine learning vulnerabilities.
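
To make the idea concrete, here is a minimal sketch of one way such a visualization could be prepared; it is not the pipeline used in the case studies, just an illustration under stated assumptions. It builds a k-nearest-neighbor similarity graph over training samples with scikit-learn and networkx, then exports a GEXF file that Gephi opens directly. The synthetic "poison" cluster, variable names, and the choice of 5 neighbors are all hypothetical.

```python
# Sketch (assumed workflow, not the talk's actual pipeline): connect each
# training sample to its nearest neighbors and export the graph for Gephi.
# In Gephi, a force-directed layout plus modularity coloring can surface
# tight, isolated clusters -- one anomaly poisoned points tend to form.
import networkx as nx
import numpy as np
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(42)

# Hypothetical data: 200 clean samples plus 20 injected near-duplicates,
# a common signature of a poisoning attack on a training set.
clean = rng.normal(0.0, 1.0, size=(200, 16))
poison = rng.normal(5.0, 0.05, size=(20, 16))
X = np.vstack([clean, poison])
labels = ["clean"] * len(clean) + ["poison"] * len(poison)

# Link each sample to its 5 nearest neighbors by Euclidean distance.
adj = kneighbors_graph(X, n_neighbors=5, mode="distance")
G = nx.from_scipy_sparse_array(adj)

# Tag nodes so their status can be mapped to node color inside Gephi.
nx.set_node_attributes(G, {i: labels[i] for i in G.nodes}, name="status")

# Write a GEXF file that Gephi opens via File -> Open.
nx.write_gexf(G, "training_data_graph.gexf")
```

Loading the resulting file in Gephi and running a layout such as ForceAtlas 2 should render the injected near-duplicates as a dense satellite cluster, separate from the bulk of the clean data, which is exactly the kind of irregularity the talk highlights.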