LAB-005 – NEURAL INFRASTRUCTURE


Evaluating Open-Weight Models

Evaluating LLMs, image synthesis architectures, and high-dimensional vector embeddings on local GPU hardware. Ensuring total data sovereignty for security research while benchmarking multi-modal inference.

Neural Network Architecture

Evaluating local models.

The neural infrastructure in the Buildations lab provides a controlled environment for testing and evaluating a variety of open-weight models. By running LLM, image-synthesis, and embedding models on distinct hardware profiles, we study inference efficiency and the data-sovereignty implications for autonomous defense systems.

Key References

Touvron, H., et al. (2023). "Llama 2: Open Foundation and Fine-Tuned Chat Models." arXiv preprint arXiv:2307.09288.

Local inference pipeline.

All AI workloads run on local GPU hardware: no cloud APIs, no data leaving the network. Ollama serves LLM inference, Qdrant stores embeddings for RAG, and ComfyUI handles image generation through Stable Diffusion.
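As a concrete sketch of the local-only inference path, the snippet below composes a non-streaming request against Ollama's REST API on its default port (11434). The model tag `llama3.2` is an example and must already be pulled locally; nothing here leaves the machine.

```python
# Minimal sketch of local inference against Ollama's REST API.
# Assumes Ollama is listening on localhost:11434 (its default) and that
# the example model tag "llama3.2" has already been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> dict:
    """Compose a non-streaming /api/generate payload for Ollama."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    """Send the request to the local Ollama server and return the reply text."""
    payload = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# generate("llama3.2", "Summarize Suricata alert severity levels.")
# would return the model's reply as a plain string.
```

Because `stream` is set to `False`, the server returns one JSON object with the full completion in its `response` field, which keeps the client trivial.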

🧠
LLM Engine
Ollama

Multi-model serving with GPU acceleration, parallel inference, hot-swap models

💬
Chat Interface
Open WebUI

Full-featured chat UI with document upload, system prompts, conversation history

🔮
Vector Database
Qdrant

Semantic search, document embeddings, RAG context retrieval

🎨
Image Generation
ComfyUI · SDXL

Node-based workflow for Stable Diffusion with custom checkpoints

Language Model

LLaMA 3.2

Advanced language understanding with GPU-accelerated inference. Conversational AI, code generation, and analysis.

3B Parameters
128K Context
Image Generation

Stable Diffusion XL

High-quality image synthesis from text prompts via ComfyUI node workflows with custom checkpoints.

1024px Resolution
2.6B Parameters
Speech

Whisper Large

Multilingual speech recognition and translation. Local processing with no audio data leaving the network.

99+ Languages
1.5B Parameters
Code

CodeLlama

Specialized code understanding, generation, and debugging across multiple programming languages.

34B Parameters
100K Context
Vision-Language

CLIP

Image classification and semantic search. Powers visual similarity queries in the vector database.

400M Parameters
Multi-Modal
Vectors

Embeddings

Semantic similarity and RAG pipeline. Documents are chunked, embedded, and stored in Qdrant for context retrieval.

768 Dimensions
Fast Inference
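The retrieval step described above can be sketched in a few lines. The toy bag-of-words embedding below stands in for a real 768-dimensional embedding model, and the in-memory list of chunks stands in for Qdrant; only the shape of the pipeline (chunk, embed, rank by cosine similarity) reflects the actual setup.

```python
# Sketch of the RAG retrieval step: embed document chunks and a query,
# then rank chunks by cosine similarity. The bag-of-words "embedding"
# and in-memory storage are stand-ins for a real model and Qdrant.
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    return text.lower().replace(".", "").split()

def embed(text: str, vocab: list[str]) -> list[float]:
    """Toy embedding: token counts over a shared vocabulary."""
    counts = Counter(tokenize(text))
    return [float(counts[w]) for w in vocab]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], top_k: int = 2) -> list[str]:
    """Return the top_k chunks most similar to the query."""
    vocab = sorted({t for text in chunks + [query] for t in tokenize(text)})
    q = embed(query, vocab)
    ranked = sorted(chunks, key=lambda c: cosine(q, embed(c, vocab)), reverse=True)
    return ranked[:top_k]

chunks = [
    "Cowrie records SSH credential attempts from the honeypot.",
    "ComfyUI builds Stable Diffusion workflows as node graphs.",
    "Suricata raises IDS alerts on suspicious network traffic.",
]
print(retrieve("ssh honeypot credentials", chunks, top_k=1))
# ['Cowrie records SSH credential attempts from the honeypot.']
```

The retrieved chunks are then prepended to the LLM prompt as context, which is the whole trick behind RAG.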

Adaptive security ML.

The neural network doesn't just serve chatbots; it powers the adaptive defense engine. Security logs from the honeypots and IDS are processed by ML models that learn to identify new attack patterns, creating a self-improving security posture.

INPUT LAYER
Data Ingestion

Honeypot logs (Cowrie, Dionaea, Heralding), Suricata IDS alerts, and Zeek network metadata, synced every 5 minutes from the edge node.
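The ingestion step can be sketched as a simple filter over a synced batch of log lines. The field names below follow Suricata's EVE JSON format (`event_type`, `src_ip`, `alert.signature`); the sample lines themselves are synthetic.

```python
# Sketch of the ingestion step: keep only alert events from a batch of
# Suricata eve.json lines. Field names follow the EVE JSON format; the
# sample batch is synthetic.
import json

def ingest_alerts(lines: list[str]) -> list[dict]:
    """Parse a batch of EVE JSON log lines and keep only alert events."""
    events = (json.loads(line) for line in lines if line.strip())
    return [e for e in events if e.get("event_type") == "alert"]

batch = [
    '{"event_type": "alert", "src_ip": "203.0.113.7", '
    '"alert": {"signature": "ET SCAN SSH BruteForce"}}',
    '{"event_type": "flow", "src_ip": "198.51.100.2"}',
]
print(ingest_alerts(batch))
```

In the lab the same filter would run over the files synced from the edge node every 5 minutes, with non-alert event types (flows, DNS, TLS metadata) routed to separate feature streams.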

PROCESSING LAYER
Feature Extraction

Attack vectors, credential patterns, payload signatures, protocol anomalies, and temporal patterns extracted and vectorized for model training.
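A minimal sketch of the feature-extraction step: turn one raw honeypot event into a numeric vector for model training. The event fields mirror Cowrie's JSON log format, but the chosen features are illustrative, not the lab's actual feature set.

```python
# Sketch of feature extraction: vectorize one login-attempt event.
# Field names mirror Cowrie's JSON logs; the features are illustrative.
import json
from datetime import datetime

COMMON_PASSWORDS = {"root", "admin", "123456", "password"}  # toy wordlist

def extract_features(event: dict) -> list[float]:
    """Turn one honeypot event into a numeric feature vector."""
    password = event.get("password", "")
    ts = datetime.fromisoformat(event["timestamp"].replace("Z", "+00:00"))
    return [
        float(len(event.get("username", ""))),      # credential length
        float(len(password)),                       # password length
        float(any(c.isdigit() for c in password)),  # digit in password?
        float(password.lower() in COMMON_PASSWORDS),  # wordlist hit?
        float(ts.hour),                             # temporal pattern
    ]

raw = ('{"eventid": "cowrie.login.failed", "src_ip": "203.0.113.7", '
       '"username": "root", "password": "123456", '
       '"timestamp": "2025-01-01T03:14:00Z"}')
print(extract_features(json.loads(raw)))  # [4.0, 6.0, 1.0, 1.0, 3.0]
```

Stacking these vectors over many events gives the training matrix; payload signatures and protocol anomalies would be vectorized the same way from Suricata and Zeek records.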

OUTPUT LAYER
Automated Response

Threat classification, automated bans via CrowdSec + Fail2ban, rule updates, real-time ntfy notifications, and continuous model retraining.
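The decision logic at this layer can be sketched as a score-to-tier mapping. The thresholds and tier names below are illustrative; in the lab, these decisions would drive CrowdSec/Fail2ban bans and ntfy notifications rather than just returning strings.

```python
# Sketch of the output layer's decision logic: map a model's threat
# score to a response tier. Thresholds are illustrative; in the lab the
# tiers would trigger CrowdSec/Fail2ban bans and ntfy notifications.
def respond(src_ip: str, threat_score: float) -> str:
    """Pick a response for one scored source IP (score in [0, 1])."""
    if threat_score >= 0.9:
        return f"ban {src_ip}"    # push a ban decision to CrowdSec/Fail2ban
    if threat_score >= 0.6:
        return f"alert {src_ip}"  # real-time ntfy notification
    return f"log {src_ip}"        # keep for the next retraining batch

print(respond("203.0.113.7", 0.95))   # ban 203.0.113.7
print(respond("198.51.100.2", 0.70))  # alert 198.51.100.2
print(respond("192.0.2.10", 0.20))    # log 192.0.2.10
```

Every decision, whatever the tier, is logged back into the training set, which is what closes the continuous-retraining loop.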
