Measurable Data and AI Progress Every Single Day

We transform your Microsoft technology stack into competitive advantage through transparent, daily improvements you can see and measure.

Your Ability To Drive Decisions from Data Gets Better Every Single Day

Our Commitment to Transparency

We believe complete transparency at every step builds the trust that makes transformation successful. That's why we share daily progress updates, quantify every improvement, and ensure you're never left wondering what's happening with your data platform.

Our Specialisations

Deep expertise in specialised areas of data and analytics, built on Microsoft technologies with daily progress transparency.

⚖️

Analytics Governance

Establish robust governance frameworks for your analytics estate. Ensure data quality, security, and compliance across your organisation.

⚙️

Analytics Platform Setup/Review

Comprehensive setup and review of your analytics platforms to ensure optimal configuration, security, and performance.

🎯

Self-Service BI Adoption Support

Empower your teams to make data-driven decisions independently. We help organisations build sustainable self-service analytics capabilities.

🔗

Open Data Integration

Seamlessly integrate external open data sources to enrich your analytics. Leverage public datasets for enhanced insights.

🎓

BI Developer Training

Upskill your technical teams with practical, hands-on training in modern BI tools and techniques. Tailored programmes for all skill levels.

📈

Digital Marketing Analytics

Transform marketing data into actionable insights. Track campaigns, measure ROI, and optimise digital marketing performance.

Latest Articles

The Hidden Cost of MCPs and Custom Instructions on Your Context Window

Large context windows sound limitless: 200K, 400K, even a million tokens. But once you bolt on a few MCP servers, dump in a giant CLAUDE.md, and drag a long chat history behind you, you can easily burn over 50% of that window before you paste a single line of code. This post is about that hidden tax, and how to stop paying it.

Where This Started

This exploration started when I came across a LinkedIn post by Johnny Winter featuring a YouTube video about terminal-based AI tools and context management. The video demonstrates how tools like Claude Code, Gemini CLI, and others leverage project-aware context files, which got me thinking about what's actually consuming all that context space.

Video by NetworkChuck

ℹ️ Note: While this post uses Claude Code for examples, these concepts apply to any AI coding agent: GitHub Copilot, Cursor, Windsurf, Gemini CLI, and others.

The Problem: You're Already at 50% Before You Start

Think of a context window as working memory. Modern AI models have impressive limits (as of 2025):

- Claude Sonnet 4.5: 200K tokens (1M beta for tier 4+)
- GPT-5: 400K tokens via API
- Gemini 3 Pro: 1M input tokens

A token is roughly 3-4 characters, so 200K tokens equals about 150,000 words. That sounds like plenty, right? Here's what actually consumes it:

- System prompt and system tools
- MCP server tool definitions
- Memory files (CLAUDE.md, .cursorrules)
- Autocompact buffer (reserved for conversation management)
- Conversation history
- Your code and the response being generated

By the time you add a few MCPs and memory files, a large chunk of your context window is already gone, before you've written a single line of code.

Real Numbers: The MCP Tax

Model Context Protocol (MCP) servers make it easier to connect AI agents to external tools and data. But each server you add costs tokens. Here's what my actual setup looked like (from Claude Code's /context command): MCP tools alone consume 16.3% of the context window, before I've even started a conversation. Combined with system overhead, I'm already at 51% usage with essentially zero messages.

The Compounding Effect

The real problem emerges when overhead compounds. Here's my actual breakdown:

Category             Tokens   % of Window
System prompt        3.0k     1.5%
System tools         14.8k    7.4%
MCP tools            32.6k    16.3%
Custom agents        794      0.4%
Memory files         5.4k     2.7%
Messages             8        0.0%
Autocompact buffer   45.0k    22.5%
Free space           99k      49.3%

Total: 101k/200k tokens used (51%)

You're working with less than half your theoretical capacity, and that's with essentially zero conversation history. Once you start coding, the available space shrinks even further.

Why This Matters: Performance and Quality

Context consumption affects more than just space:

Processing latency: Empirical testing with GPT-4 Turbo shows that time to first token increases by approximately 0.24ms per input token. That means every additional 10,000 tokens adds roughly 2.4 seconds of latency to initial response time. (Source: Glean's research on input token impact; a small calculator using these numbers follows this section.)

Cache invalidation: Modern AI systems cache frequently used context. Any change (adding an MCP, editing instructions) invalidates that cache, forcing full reprocessing.

Quality degradation: When context gets tight, models may:

- Skip intermediate reasoning steps
- Miss edge cases
- Spread attention too thinly across information
- Fill gaps with plausible but incorrect information
- Truncate earlier conversation, losing track of prior requirements

I've noticed this particularly in long coding sessions. After discussing architecture early in a conversation, the agent later suggests solutions that contradict those earlier decisions, because that context has been truncated away.
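The arithmetic above is easy to replay against your own numbers. Here is a minimal sketch in Python, assuming the token counts from my /context breakdown and Glean's 0.24 ms-per-token figure; both are rough inputs rather than constants, so substitute your own values:

    # Rough context-overhead calculator, using the numbers from the
    # breakdown above. Swap in your own /context figures.

    WINDOW = 200_000           # total context window (tokens)
    MS_PER_INPUT_TOKEN = 0.24  # Glean's GPT-4 Turbo estimate; varies by model

    overhead = {
        "system_prompt":      3_000,
        "system_tools":      14_800,
        "mcp_tools":         32_600,
        "custom_agents":        794,
        "memory_files":       5_400,
        "autocompact_buffer": 45_000,
    }

    used = sum(overhead.values())
    print(f"Baseline overhead: {used:,} tokens ({used / WINDOW:.1%} of window)")
    print(f"Free space before any conversation: {WINDOW - used:,} tokens")

    # Linear estimate of extra time-to-first-token from this overhead;
    # real systems amortize much of it via prompt caching.
    latency_s = used * MS_PER_INPUT_TOKEN / 1000
    print(f"Approx. added latency per uncached request: {latency_s:.1f} s")

Running this with my numbers gives roughly 102k tokens of baseline overhead (about 51% of the window), matching the table above.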
Practical Optimization: Real-World Example

Let me share a before/after from my own setup.

Before optimization:

- 10+ MCPs enabled (all the time)
- MCP tools consuming 32.6k tokens (16.3%)
- Only 99k tokens free (49.3%)
- Frequent need to summarize/restart sessions

After optimization:

- 3-4 MCPs enabled by default
- MCP tools reduced to ~12k tokens (~6%)
- Memory files trimmed to essentials (~3k tokens)
- Over 140k tokens free (70%+)

Results: more working space, better reasoning quality, fewer context limit issues, and faster responses.

Optimization Checklist

Before adding another MCP or expanding instructions, ask:

- Have I measured my current context overhead?
- Is my custom instruction file under 5,000 tokens?
- Do I actively use all enabled MCPs?
- Have I removed redundant or outdated instructions?
- Could I accomplish this goal without consuming more context?

In Claude Code: use the /context command to see your current context usage breakdown.

Specific Optimization Strategies

1. Audit Your MCPs Regularly

Ask yourself: Do I use this MCP daily? Weekly? Monthly? Could I accomplish this task without the MCP?

Action: Disable MCPs you don't use regularly. Enable them only when needed.

Impact of selective MCP usage: by selectively disabling MCPs you don't frequently use, you can immediately recover significant context space. This screenshot shows the difference in available context when strategically choosing which MCPs to keep active versus loading everything. In Claude Code, you can toggle MCPs through the settings panel. This simple action can recover 10-16% of your context window.

2. Ruthlessly Edit Custom Instructions

Your CLAUDE.md memory files, .cursorrules, or copilot-instructions.md should be:

- Concise (under 5,000 tokens; a quick checker sketch follows the strategies below)
- Focused on patterns, not examples
- Project-specific, not general AI guidance

Bad example:

    When writing code, always follow best practices. Use meaningful
    variable names. Write comments. Test your code. Follow SOLID
    principles. Consider performance. Think about maintainability...
    (Continues for 200 lines)

Good example:

    Code Style:
    - TypeScript strict mode
    - Functional patterns preferred
    - Max function length: 50 lines
    - All public APIs must have JSDoc

    Testing:
    - Vitest for unit tests
    - Each function needs test coverage
    - Mock external dependencies

3. Start Fresh When Appropriate

Long conversations accumulate context. Sometimes the best optimization is:

- Summarizing what's been decided
- Starting a new session with that summary
- Dropping irrelevant historical context

4. Understand the Autocompact Buffer

Claude Code includes an autocompact buffer that helps manage context automatically. When you run /context, you'll see something like:

    Autocompact buffer: 45.0k tokens (22.5%)

This buffer reserves space to prevent hitting hard token limits by automatically compacting or summarizing older messages during long conversations. It maintains continuity without abrupt truncation, but it also means that 22.5% of your window is already taken. You can also see and toggle this behavior in Claude Code's /config settings: in this screenshot, Auto-compact is enabled, which keeps a dedicated buffer for summarizing older messages so long conversations stay coherent without suddenly hitting hard context limits.
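To check your own memory files against the 5,000-token guideline from strategy 2, a minimal sketch like this works, assuming the rough 4-characters-per-token rule of thumb from earlier; a real tokenizer would be more precise, and the file names are just the common defaults:

    # Approximate token counts for agent memory files, using the
    # ~4 characters-per-token rule of thumb from earlier in the post.
    from pathlib import Path

    BUDGET = 5_000  # suggested ceiling for custom instruction files

    for name in ["CLAUDE.md", ".cursorrules",
                 ".github/copilot-instructions.md"]:
        path = Path(name)
        if not path.exists():
            continue
        est_tokens = len(path.read_text(encoding="utf-8")) / 4
        flag = "over budget!" if est_tokens > BUDGET else "ok"
        print(f"{name}: ~{est_tokens:,.0f} tokens ({flag})")

Run it from your project root; anything flagged over budget is a candidate for the ruthless editing described above.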
Claude Code Specific Limitations: The Granularity Problem

Claude Code currently has a platform-level limitation that makes fine-grained control challenging, documented in GitHub Issue #7328: "MCP Tool Filtering".

The core issue: Claude Code loads ALL tools from configured MCP servers. You can only enable or disable entire servers, not individual tools within a server.

The impact: large MCP servers with 20+ tools can easily consume 50,000+ tokens just on definitions. If a server has 25 tools but you only need 3, you must either:

- Load all 25 tools and accept the context cost
- Disable the entire server and lose access to the 3 tools you need
- Build a custom minimal MCP server (significant development effort)

This makes tool-level filtering essential for context optimization, not just a convenience. The feature is under active development with community support. In the meantime:

- Use MCP servers sparingly
- Prefer smaller, focused servers over large multi-tool servers
- Regularly audit which servers you actually need enabled
- Provide feedback on the GitHub issue to help prioritize this feature

Key Takeaways

You're burning a huge portion of your context window before you even paste in your first file. MCP tools alone can consume 16%+ of your window. System tools add another 7%. The autocompact buffer reserves 22%. It adds up fast.

Optimization is ongoing. Regular audits of MCPs and memory files keep your agent running smoothly. Aim to keep baseline overhead under 30% of total context (excluding the autocompact buffer).

Measurement matters. Use /context in Claude Code to monitor your overhead. You can't optimize what you don't measure.

Performance degrades subtly. Latency increases roughly 2.4 seconds per 10,000 tokens based on empirical testing. Reasoning quality drops as context fills up.

Start minimal, add intentionally. The best developers using AI agents:

- Start minimal
- Add capabilities intentionally
- Monitor performance impact
- Optimize regularly
- Remove what isn't providing value

The goal isn't to minimize context usage at all costs. The goal is intentional, efficient context usage that maximizes response quality, processing speed, and available working space. Think of your context window like RAM in a computer: more programs running means less memory for each program, and eventually everything slows down. It's not about having every tool available. It's about having the right tools, configured optimally, for the work at hand.

Resources

Official documentation:
- Claude Code MCP Documentation
- Model Context Protocol (MCP) Overview
- Claude Code Best Practices
- Claude Code Cost Management
- Claude Context Windows

Research and performance:
- How Input Token Count Impacts LLM Latency - Glean

Community resources:
- Model Context Protocol Documentation
- GitHub Copilot Custom Instructions
- Johnny Winter's LinkedIn Post on Terminal AI
- You've Been Using AI the Hard Way (Use This Instead) - YouTube Video

Have you optimized your AI agent setup? What context window challenges have you encountered? I'd love to hear your experiences and optimization strategies.

23 Nov 2025 · SelfServiceBI

Guiding AI Agent - Power BI Report Development Example

How LLMs Mirror Human Pattern Recognition

LLMs are not thinking machines, but pattern-matching algorithms. The reason we feel like they are thinking is that most of our brain does pattern matching, and it works most of the time. Daniel Kahneman described in his book "Thinking, Fast and Slow" a "System 1" that is fast, instinctive, and emotional, and a "System 2" that is slower, more deliberative, and more logical. My view is that current LLMs represent our brain running on "System 1", which works well if you have ample experience with the topic you want to apply it to (the famous 10,000 hours principle). If the challenge you would like to solve does not have a huge literature, or you want to solve problems in a novel way, that is the time to use "System 2". But AI does not have a "System 2" option that works well. The relatively newer reasoning models are one attempt to solve this problem, but they still require ample data in their training set to avoid hallucinating.

Power BI Report Development in Code

Developing Power BI reports using the new PBIR format is relatively new, and there is not a lot of data available on how to do it. There is not really a best practice, because report development is as much art as it is science. Even using AI for DAX requires "carefully designed instructions and examples", according to Jeffrey Wang.

Using LLMs with DAX: LinkedIn Post on NL2DAX Benchmark (view this post on LinkedIn).

AI Agent Research - Audio Blog

I started researching how to use the tools available to guide agents to improve the code they are generating. I encourage you to listen to this, because you will find a lot of useful information in it!

AI Agents Unveiled: Harnessing Power, Dodging Pitfalls, and Protecting Your Brain in the Age of AI, by Mihaly Kavasi (generated by NotebookLM)

New to NotebookLM?

Agent Instructions

Almost all agents have a prompt file you can adjust to guide how the agent interacts with the code you are working on:

AI Agent Name    Instruction File Name
Claude           claude.md
GitHub Copilot   copilot-instructions.md
Gemini           gemini-config.yaml
Llama            llama-instructions.md
Mistral          mistral-prompt.md
Lovable          knowledge.md
Cursor           .cursorrules

These are the files you can populate. Here is an example of teaching the AI to understand the PBIR format. Notice that you need a detailed understanding of how Power BI works in order to validate and correct the information the AI generates for itself.

Prompt steps example for claude.md:

1. Init.
2. Extend claude.md with Static Resources containing the images and icons added to Power BI, as well as the Base Theme and the Applied Theme. The way Power BI decides how to display a report element follows this hierarchy: the attribute specified in the visual; if not set, the attribute specified in the registered theme; if not set, the attribute specified in the base theme; otherwise, the default value. This is relevant to understanding where certain design settings are set within the report.
3. Extend claude.md to include the following (a sketch that walks these files appears after the steps):
   - report.json contains the report-level filters
   - pages.json contains the settings for the opening page when opening the report
   - page.json contains the page-level filters
   - visual.json contains visual-level filtering
4. Extend claude.md with information about where to find the names of entities such as pages, visuals, and bookmarks.
5. In claude.md, clarify the definition of visual names by checking multiple visuals, since some visuals have names and others do not...
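To ground step 3, here is a minimal sketch that walks a PBIR definition folder and inventories its pages and visuals. It assumes the layout definition/pages/<page>/visuals/<visual>/visual.json and the field names shown in the comments; the report path is hypothetical and the PBIR format is still evolving, so verify both against a report you have exported yourself:

    # Minimal sketch: inventory the pages and visuals of a PBIR report.
    # Assumes definition/pages/<page>/visuals/<visual>/visual.json;
    # check the layout against your own report before relying on it.
    import json
    from pathlib import Path

    report_root = Path("MyReport.Report/definition")  # hypothetical path

    for page_dir in sorted((report_root / "pages").iterdir()):
        page_file = page_dir / "page.json"
        if not page_file.is_file():
            continue
        page = json.loads(page_file.read_text(encoding="utf-8"))
        print(f"Page: {page.get('displayName', page_dir.name)}")
        for visual_file in sorted(page_dir.glob("visuals/*/visual.json")):
            visual = json.loads(visual_file.read_text(encoding="utf-8"))
            # Not every visual carries a name, as noted in step 5 above.
            vtype = visual.get("visual", {}).get("visualType", "unknown")
            print(f"  Visual {visual_file.parent.name}: type={vtype}")

An inventory like this is also a handy sanity check on whatever the agent writes into claude.md about your report's structure.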
Join me: I will show you in a live demo how to work with AI agents in Power BI.

Join the London FABUG Community

Interested in learning more about AI agents and Power BI development? Join us at the next London Fabric and Power BI User Group meetup!

I recommend watching this video to understand where AI agents and LLMs in general are heading: How to change your prompts for GPT-5 (watch on YouTube).

Conclusion

Working with AI agents in specialized domains like PBIR development requires bridging the gap between pattern matching and true reasoning. We must act as the "System 2", providing detailed instructions, validating outputs, and correcting misconceptions through instruction files that transfer our domain expertise.

The key takeaway? AI agents can accelerate Power BI development when paired with human expertise. Success requires understanding both the technology (Power BI, DAX, PBIR) and how to effectively communicate this knowledge to AI partners. The future isn't about replacing human expertise; it's about amplifying it through thoughtful human-AI collaboration. As tools evolve, our role shifts from coding everything to skillfully guiding AI agents toward desired outcomes.

Resources

- Claude Code Best Practices (Obsidian Publish)
- Why GitHub Copilot Custom Instructions Matter - Thomas Thornton

12 Aug 2025 · SelfServiceBI

Creating a Power BI Knowledge base with NotebookLM

Power BI documentation is extensive, but finding the right information when you need it can be challenging. What if you could have an AI assistant that knows all the Power BI documentation inside and out? That's exactly what we can achieve using Google's NotebookLM. NotebookLM is Google's AI-powered research assistant that can analyze and understand large collections of documents, making them searchable and queryable through natural language. In this post, I'll show you how to create a comprehensive Power BI knowledge base using NotebookLM.

Why Create a Power BI Knowledge Base?

Power BI has thousands of pages of documentation scattered across Microsoft Learn, community forums, and various resources. Finding specific information often involves:

- Searching through multiple documentation sites
- Reading lengthy articles to find relevant sections
- Trying to remember where you saw that specific feature explanation
- Piecing together information from different sources

A centralized knowledge base powered by AI can solve these problems by providing instant, contextual answers to your Power BI questions.

Isn't Just Using an LLM Good Enough?

Large language models (LLMs) are incredibly versatile because they're trained on massive amounts of data. However, this also means they can sometimes struggle to find specific details within a huge web of information. Plus, since LLMs generate responses based on patterns rather than exact facts, they might occasionally provide outdated info or even make things up (a phenomenon known as "hallucination").

Getting Started with Power BI Documentation

The first step is gathering the Power BI documentation. Microsoft provides comprehensive documentation on Microsoft Learn, covering everything from basic concepts to advanced features: Power BI Official documentation.

You might also want to collect documentation covering:

- Community Tools: add-ons, custom visuals, and utilities for Power BI.
- Code Repositories: GitHub/GitLab samples, DAX, Power Query, and scripts.
- Related Technology Docs: docs for Entra, Dataverse, Fabric, Synapse, SQL Server.
- Community Blogs & Videos: tutorials and tips from the Power BI community.
- Official Power BI Blog: updates and best practices from the product team.
- Conference Materials: presentations and slides from Power BI events.
- Sample Datasets & Templates: example datasets, PBIX files, and templates.
- FAQ & Troubleshooting: common issues and solutions from forums.
- API & Developer Docs: resources for REST API, embedding, and automation.
- Security & Compliance: guides on governance and data protection.

You can also take advantage of NotebookLM's built-in Discover feature to search for additional resources and further enrich your knowledge base. (A small download sketch for this gathering step appears later in the post.)

Setting Up NotebookLM

Once you have your documentation organized:

1. Visit NotebookLM - go to notebooklm.google.com
2. Create a new notebook - start a new project for your Power BI knowledge base
3. Upload documents - add your collected documentation files
4. Let NotebookLM process - the AI will analyze and index your content

NotebookLM supports various file formats including PDF, text files, and even Google Docs. It can handle substantial amounts of content, making it perfect for comprehensive documentation collections.

Querying Your Knowledge Base

Once your knowledge base is set up, you can start asking questions in natural language. NotebookLM will provide detailed answers with citations, showing you exactly which documents contain the relevant information.
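Since NotebookLM wants files rather than live URLs, a small download step helps with the gathering stage described earlier. Here is a minimal sketch; the URL list is an illustrative starting point rather than a curated set, and Microsoft Learn pages arrive as HTML, so you may want to convert them to text or PDF before upload:

    # Minimal sketch: fetch a few documentation pages to local files
    # ready for upload to NotebookLM. The URLs are illustrative
    # starting points; build your own list for a fuller knowledge base.
    from pathlib import Path
    from urllib.parse import urlparse
    import urllib.request

    urls = [
        "https://learn.microsoft.com/power-bi/fundamentals/power-bi-overview",
        "https://learn.microsoft.com/dax/dax-overview",
    ]

    out_dir = Path("powerbi-kb")
    out_dir.mkdir(exist_ok=True)

    for url in urls:
        name = urlparse(url).path.strip("/").replace("/", "_") + ".html"
        with urllib.request.urlopen(url) as resp:
            (out_dir / name).write_bytes(resp.read())
        print(f"saved {name}")

The resulting folder can then be uploaded as sources in a new notebook, following the setup steps above.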
Important reminder: verify!

- Cross-reference answers with the source
- Test suggested solutions before implementing
- Keep your documentation up to date

Conclusion

Creating a Power BI knowledge base with NotebookLM transforms how you access and use Power BI documentation. Instead of spending time searching through multiple resources, you can get instant, contextual answers to your questions. Start building your Power BI knowledge base today, and experience the difference of having all Power BI information at your fingertips, ready to answer any question you might have.

20 Jun 2025 · SelfServiceBI

Ready to See Daily Progress?

Let's discuss how we can transform your data capabilities with transparent, measurable improvements every single day.

Or reach out directly at +44 7495 305 143