Jinyoung
Dev

Supercharge Your LLMs: Build Context-Aware AI Applications with Model Context Protocol


In November 2024, Anthropic announced a new protocol called MCP. For more details, please refer to this link.

In this article, we will take a closer look at MCP.

Introduction to MCP

MCP stands for Model Context Protocol.

Here, "Model" refers to AI models such as GPT, Claude, and Gemini. "Context" is a term that refers to additional external information provided to the Model. "Protocol" literally means a convention.

Putting it all together, we can summarize it in one sentence as follows:

A protocol for providing external information to AI Models.

Now, let's take a closer look.

Why Provide Context to the Model

First, let's look at why we provide context to the model at all. Understanding this is the key to understanding why MCP was created.

LLMs (Large Language Models) like GPT have a "Knowledge cut-off date" due to their technical characteristics. They learn a vast amount of text during the pre-training phase, but they only know the information up to the point when that pre-training was performed.

OpenAI Knowledge Cut-off date

The image above is a capture of the OpenAI section of a GitHub repo that collects the knowledge cut-off dates of major LLMs. Each model has a Cut-off Date entry specifying a concrete date. Not only OpenAI models, but all LLMs have such a cut-off date.

In other words, an LLM only knows information up to its cut-off date and cannot answer questions about anything after it; if it does answer, the information is fabricated. A very easy way to test an LLM's cut-off date is to go to any LLM service and ask, "How is the weather in Seoul today?"

Claude - weather query
Claude 3.7 Sonnet: Knowledge cut-off date is Oct 2024

When I asked Claude, it said it couldn't know today's weather in Seoul. If an LLM exists that has weather information for March 19, 2025, the day I am writing this article, it will be a model announced in the future. It's probably being pre-trained hard right now.

Then shall we change the date?

The knowledge cut-off date for Claude 3.7 Sonnet, which I just used, is October 2024, so I'll ask about the weather for a date before that.

Claude - weather query

When I asked about the weather in Seoul in March 2024, it said that the weather information for that date was not in the training data.

Through these two queries, we can identify two limitations that LLMs inherently have:

  1. Knowledge Cut-off date
    • The LLM cannot know about events or information that occurred after the cut-off date.
  2. Information not included in the pre-training or post-training process
    • Even if the information was created before the cut-off date, if it was not included in the learning process, the LLM cannot know it.

Even a very simple weather query cannot be handled properly by an LLM on its own. The accurate information therefore has to be looked up, in real time or at the moment the user wants it, and injected into the LLM. This is called in-context learning: providing all the information the LLM needs to process the task correctly inside the prompt. Do you remember the period shortly after ChatGPT came out? People copied information they had searched for themselves, pasted it into the chat interface, and asked follow-up questions based on it.
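To make that manual workflow concrete, here is a minimal sketch of it expressed as TypeScript. The weather figures and prompt wording are hypothetical; the point is only that the looked-up facts travel to the model inside the prompt text itself.

// In-context learning by hand: paste looked-up facts into the prompt.
// The weather values below are made up for illustration.
const context =
  "Seoul weather on 2025-03-19 (looked up manually): 12°C, clear, light wind.";
const question = "How is the weather in Seoul today?";

// Everything the model needs is carried inside the prompt string.
const prompt = [
  "Answer using only the context below.",
  "",
  `Context: ${context}`,
  "",
  `Question: ${question}`,
].join("\n");

console.log(prompt); // This is what people used to paste into the chat window.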

MCP is a protocol for how to provide such context to an LLM. Instead of a person copying and pasting the necessary information into the chat window, the LLM (or Agent, or Client) can dynamically look up and modify the context.

It's difficult to fully understand MCP from the explanation so far. In the rest of this article, I will help you build a deeper understanding by walking through how MCP actually works, using the "weather" use case I tested briefly above.


Background of MCP's Emergence

Let's take a closer look at the background behind MCP's emergence. Understanding it will help you adapt more quickly when you later see MCP's concrete components and actual code.

There are several difficulties when developing LLM-based services without using MCP.

  1. Need for LLM feature expansion

    • This is related to the inherent limitations of LLMs mentioned earlier.
    • LLMs have powerful natural language processing capabilities, but they cannot directly interact with the external world or perform specific tasks on their own. For example, tasks such as reading files, querying databases, or calling external APIs are impossible with only the LLM's own capabilities.
  2. Complexity of data and tool integration

    • When developing LLM-based applications, it is often necessary to integrate various data sources (file systems, databases, web services, etc.) and tools (search engines, code execution environments, etc.) with the LLM.
    • This integration process is complex and cumbersome, and each application has had to implement it individually.
  3. Lack of standardized interface

    • Because there is no standardized protocol defining how LLMs interact with external resources, collaboration between application developers and LLM providers is difficult, and the reusability and compatibility of the resulting applications suffer.
  4. Security and permission management

    • When an LLM accesses external resources, security and permission management are important. There is a need for a mechanism to control access to sensitive information and limit the scope of operations that the LLM can perform.

MCP was introduced to solve these problems. MCP provides a standardized way to connect LLM applications and external resources, like a "USB-C port for AI applications."

Through this:

  • LLMs can access various data and tools through the MCP server.
  • Developers can easily build and share reusable MCP servers.
  • Clients can interact with various servers by supporting MCP.
  • The development and deployment of LLM applications are simplified.
  • Security and permission management are facilitated.

MCP Components

Let's learn about the three main components that make up MCP.

MCP Components
MCP components diagram

1. Host Application

An entity that wants to utilize LLM capabilities through MCP, such as an LLM application (e.g., Claude Desktop, IDE) or an AI tool.

  • Provides an interface that directly interacts with the user.
  • Initializes and manages one or more MCP Clients.
  • Connects to the MCP Server through the MCP Client, sends requests, and receives responses.
  • Starts interaction with the LLM and provides the results to the user.

Specific examples include Claude Desktop and Cursor. The Host is the only point of contact from the user's (customer's) perspective, and it is an area that service developers do not touch.

2. MCP Client

A component that exists inside the Host Application and manages the 1:1 connection with the MCP Server.

  • Connection management with MCP Server
    • Each MCP Client establishes and maintains a 1:1 connection with an MCP Server; a Host runs one Client per Server it connects to. This connection provides access to the resources, tools, prompts, etc. offered by the MCP Server.
  • Protocol processing
    • The MCP Client implements the message format, request/response pattern, error handling, etc. defined in the Model Context Protocol. Through this, it communicates with the MCP Server in a standardized way.
  • Integration with Host Application
    • The MCP Client is integrated within the Host Application (e.g., Claude Desktop, IDE, other AI tools) and provides an interface so that the Host Application can utilize the capabilities of the MCP Server.

It is one of the components that you, as a service developer, can build.

3. MCP Server

A lightweight program that encapsulates specific capabilities or data sources that can be provided to the LLM.

  • Provides an interface that complies with the Model Context Protocol.
  • Exposes Resources (data), Tools (capabilities), and Prompts (templates) to Clients.
  • Receives and processes requests from Clients and returns the results.
  • Can access local data sources (files, databases, etc.) on its own or interact with external services (APIs).

Let's take a closer look at the three main items that the MCP server exposes to the MCP client.

Server ➡️ Client: Tools

Executable capabilities provided by the MCP Server. The LLM can call Tools to perform specific tasks or interact with external systems.

  • Key Features
    • Model-controlled: Tools are automatically called by the LLM (of course, with the user's approval). That is, the LLM decides which Tool to call based on the given context and task.
    • Action-based: Tools go beyond simply providing data and perform actions. For example, they can perform tasks such as manipulating the file system, querying a database, or calling an external API.
    • JSON Schema-based definition: Each Tool clearly defines input parameters (input schema) using JSON Schema. Through this, the LLM can understand how to use the Tool and call it in the correct format.
  • How it Works
    1. Discovery: The Client obtains the list of Tools provided by the Server and the input schema of each Tool through the tools/list request.
    2. Invocation: The LLM decides to call a specific Tool and sends a tools/call request to the Client. This request includes the Tool name and input parameters that match the JSON Schema.
    3. Execution: The Server executes the logic for that Tool and returns the result to the Client.
    4. Error Handling: If an error occurs during Tool execution, the Server returns a response containing error information.
  • Examples
    • execute_command: A Tool that executes shell commands
    • github_create_issue: A Tool that creates an issue on GitHub
    • analyze_csv: A Tool that analyzes CSV files
    • calculate_sum: A Tool that adds two numbers (a simple example)
    • get_forecast: A Tool that looks up weather forecast information based on latitude and longitude 👈 We will implement this Tool ourselves! (A sketch of what a Tool call looks like on the wire follows below.)
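To make the Discovery/Invocation flow above concrete, here is a rough sketch of what a single Tool call looks like on the wire. MCP messages are JSON-RPC 2.0; the shapes below follow my reading of the spec, and the id and argument values are illustrative.

// A tools/call request from the Client (JSON-RPC 2.0), shown as TypeScript literals.
const toolCallRequest = {
  jsonrpc: "2.0",
  id: 1, // illustrative request id
  method: "tools/call",
  params: {
    name: "get-forecast",
    arguments: { latitude: 42.36, longitude: -71.06 },
  },
};

// The Server's response: a content array the Host can hand to the LLM.
const toolCallResult = {
  jsonrpc: "2.0",
  id: 1,
  result: {
    content: [{ type: "text", text: "Forecast for 42.36, -71.06: ..." }],
  },
};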

Server ➡️ Client: Resources

Data provided by the MCP Server. Various forms of data, such as files, database records, API responses, screenshots, and log files, can be Resources. (A minimal registration sketch follows the list below.)

  • Key Features

    • Application-controlled: Resources are explicitly selected by the Client application and provided to the LLM. That is, the Client application decides which Resource to provide to the LLM as context.
    • URI-based identification: Each Resource is identified by a unique URI (Uniform Resource Identifier).
    • Text or Binary: Resources can contain text data (UTF-8) or binary data (base64 encoded).
    • URI Templates: For dynamic Resources, URI Templates (RFC 6570) are used so that the Client can create valid URIs.
  • How it Works

    1. Discovery: The Client obtains a list of Resources (or a list of URI Templates) provided by the Server through the resources/list request.

    2. Read: The Client sends a resources/read request using the URI of a specific Resource, and the Server returns the contents of that Resource.

    3. Updates: The Server can notify the Client of changes to Resources through notifications/resources/list_changed (Resource list changed) or notifications/resources/updated (specific Resource content changed) notifications.

  • Examples

    • file:///home/user/documents/report.pdf (file)
    • postgres://database/customers/schema (database schema)
    • screen://localhost/display1 (screenshot)
    • logs://recent?timeframe=1h (log file, using URI Template)
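For comparison with the Tool we implement later, here is a minimal sketch of registering a static Resource with the same TypeScript SDK. The resource name, URI scheme, and payload are invented for illustration, and the resource() helper signature is taken from the SDK's README, so treat this as a sketch rather than a definitive reference.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

const server = new McpServer({ name: "example", version: "1.0.0" });

// Expose a static resource under a made-up URI; clients read it via resources/read.
server.resource("app-config", "config://app", async (uri) => ({
  contents: [
    { uri: uri.href, text: "theme=dark\nlocale=ko-KR" }, // illustrative payload
  ],
}));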

Server ➡️ Client: Prompts

Reusable prompt templates provided by the MCP Server. Prompts are used to instruct the LLM to perform a specific task or to guide it to respond in a specific format. (A registration sketch follows the list below.)

  • Key Features
    • User-controlled: Prompts are intended to be explicitly selected and used by the user. That is, the Client obtains a list of Prompts from the Server, shows it to the user, and operates in a way that executes the Prompt selected by the user.
    • Dynamic arguments: Prompts can have dynamic arguments. For example, a "code analysis" Prompt can take the path of the code file to be analyzed as an argument.
    • Multi-step workflows: Prompts can include multiple steps of LLM interaction. (e.g., a "debug error" Prompt might first receive an error message, ask the LLM for analysis, and receive additional input from the user.)
    • UI Integration: Prompts can be presented to the user in the form of slash commands, quick execution menus, context menus, etc. in the Client UI.
  • How it Works
    1. Discovery: The Client obtains a list of Prompts provided by the Server through the prompts/list request.
    2. Get: The Client sends a prompts/get request containing the name and arguments of the Prompt selected by the user.
    3. Execution: The Server creates an actual message based on the message template defined in that Prompt and returns it to the Client. This message may also include the contents of the Resource.
  • Examples
    • git-commit: A Prompt that creates a Git commit message
    • explain-code: A Prompt that explains code
    • analyze-project: A Prompt that analyzes project logs and code (including Resources)
    • debug-error: A Prompt that debugs errors (Multi-step workflow)
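And here is a matching sketch for a Prompt, using the explain-code example from the list above. Again, the prompt() helper signature follows the SDK's README and the template wording is invented, so take it as a sketch.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "example", version: "1.0.0" });

// A user-selectable prompt template with one dynamic argument.
server.prompt(
  "explain-code",
  { code: z.string().describe("Code to explain") },
  ({ code }) => ({
    messages: [
      {
        role: "user",
        content: { type: "text", text: `Explain what this code does:\n\n${code}` },
      },
    ],
  }),
);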

The components of the MCP server can be summarized as follows:

| Feature | Tools | Resources | Prompts |
| --- | --- | --- | --- |
| Purpose | Provide executable capabilities | Provide accessible data | Provide reusable prompt templates |
| Control | Model-controlled (automatic) | Application-controlled (explicit) | User-controlled (explicit) |
| Identification | Unique name | URI | Unique name |
| Input | Parameters defined by JSON Schema | URI (or URI Template) | Dynamic arguments |
| Output | Execution result (text, image, error, etc.) | Text or binary data | Completed messages (ready to be sent to the LLM) |

Technical Implementation

You've come a long way! Now let's look at the actual code and explain the specific implementation.

Now let's implement the Weather use case, one of the examples provided in the official MCP documentation, in Node.

System requirements

Node.js must be installed on the computer you will develop on. Use the latest version if possible; if it is not installed, visit nodejs.org and install it.

Setup

Now you need to create a suitable working directory.

working_directory
cd {your workspace directory}
mkdir weather # `md weather` for Windows
cd weather

# Initialize a new npm project
npm init -y

# Install dependencies
npm install @modelcontextprotocol/sdk zod
npm install -D @types/node typescript

# Create our files
mkdir src # `md src` for Windows
touch src/index.ts # `new-item src\index.ts` for Windows
  • Install a total of 4 dependencies for this example.
    • @modelcontextprotocol/sdk: MCP library
    • zod: Schema declaration library used when specifying Tool parameters
    • @types/node, typescript: Libraries required to build and run the application
package.json
{
  "type": "module",
  "bin": {
    "weather": "./build/index.js"
  },
  "scripts": {
    "build": "tsc && chmod 755 build/index.js"
  },
  "files": [
    "build"
  ]
}
  • Modify the package.json file created by the npm init -y command as shown above.
  • The build script compiles the TypeScript sources with tsc and then makes build/index.js executable (chmod 755).
tsconfig.json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "rootDir": "./src",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true,
    "forceConsistentCasingInFileNames": true
  },
  "include": ["src/**/*"],
  "exclude": ["node_modules"]
}
  • Create a tsconfig.json file in the weather directory and add the above content.

Source code - MCP server

Project setup is complete. Now it's time to work on the code.

src/index.ts
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const NWS_API_BASE = "https://api.weather.gov";
const USER_AGENT = "weather-app/1.0";

// Create server instance
const server = new McpServer({
  name: "weather",
  version: "1.0.0",
});
  • Create an index.ts file under the src directory and enter the code above.
  • The new McpServer({ ... }) call creates an MCP server instance named weather.

Now we need to add Helper functions. Add the code below to the index.ts file you just created.

src/index.ts
// Helper function for making NWS API requests
async function makeNWSRequest<T>(url: string): Promise<T | null> {
  const headers = {
    "User-Agent": USER_AGENT,
    Accept: "application/geo+json",
  };

  try {
    const response = await fetch(url, { headers });
    if (!response.ok) {
      throw new Error(`HTTP error! status: ${response.status}`);
    }
    return (await response.json()) as T;
  } catch (error) {
    console.error("Error making NWS request:", error);
    return null;
  }
}

interface AlertFeature {
  properties: {
    event?: string;
    areaDesc?: string;
    severity?: string;
    status?: string;
    headline?: string;
  };
}

// Format alert data
function formatAlert(feature: AlertFeature): string {
  const props = feature.properties;
  return [
    `Event: ${props.event || "Unknown"}`,
    `Area: ${props.areaDesc || "Unknown"}`,
    `Severity: ${props.severity || "Unknown"}`,
    `Status: ${props.status || "Unknown"}`,
    `Headline: ${props.headline || "No headline"}`,
    "---",
  ].join("\n");
}

interface ForecastPeriod {
  name?: string;
  temperature?: number;
  temperatureUnit?: string;
  windSpeed?: string;
  windDirection?: string;
  shortForecast?: string;
}

interface AlertsResponse {
  features: AlertFeature[];
}

interface PointsResponse {
  properties: {
    forecast?: string;
  };
}

interface ForecastResponse {
  properties: {
    periods: ForecastPeriod[];
  };
}
  • makeNWSRequest
    • Sends an HTTP GET request to the National Weather Service (NWS) API and parses the response into JSON format.
  • formatAlert
    • Converts one feature object of the alert data received from the NWS API into a human-readable string format.
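As a quick sanity check of the helper, you could call it like this (the coordinates are hypothetical, and the call must run in an async context):

// Hypothetical usage: resolve the NWS grid point for downtown Boston.
async function demo() {
  const points = await makeNWSRequest<PointsResponse>(
    `${NWS_API_BASE}/points/42.3601,-71.0589`,
  );
  // On success this logs the per-grid forecast URL; on failure, undefined.
  console.log(points?.properties.forecast);
}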

Now let's add the most important part: the Tool execution handler code. As before, add the code below to the src/index.ts file.

src/index.ts
// Register weather tools
server.tool(
  "get-alerts", // Tool name
  "Get weather alerts for a state", // Tool description
  {
    state: z.string().length(2).describe("Two-letter state code (e.g. CA, NY)"),
  }, // input parameters (Zod schema)
  async ({ state }) => { // Tool logic (async function)
    const stateCode = state.toUpperCase();
    const alertsUrl = `${NWS_API_BASE}/alerts?area=${stateCode}`;
    const alertsData = await makeNWSRequest<AlertsResponse>(alertsUrl);

    if (!alertsData) {
      return {
        content: [
          {
            type: "text",
            text: "Failed to retrieve alerts data",
          },
        ],
      };
    }

    const features = alertsData.features || [];
    if (features.length === 0) {
      return {
        content: [
          {
            type: "text",
            text: `No active alerts for ${stateCode}`,
          },
        ],
      };
    }

    const formattedAlerts = features.map(formatAlert);
    const alertsText = `Active alerts for ${stateCode}:\n\n${formattedAlerts.join("\n")}`;

    return {
      content: [
        {
          type: "text",
          text: alertsText,
        },
      ],
    };
  },
);

server.tool(
  "get-forecast",
  "Get weather forecast for a location",
  {
    latitude: z.number().min(-90).max(90).describe("Latitude of the location"),
    longitude: z.number().min(-180).max(180).describe("Longitude of the location"),
  },
  async ({ latitude, longitude }) => {
    // Get grid point data
    const pointsUrl = `${NWS_API_BASE}/points/${latitude.toFixed(4)},${longitude.toFixed(4)}`;
    const pointsData = await makeNWSRequest<PointsResponse>(pointsUrl);

    if (!pointsData) {
      return {
        content: [
          {
            type: "text",
            text: `Failed to retrieve grid point data for coordinates: ${latitude}, ${longitude}. This location may not be supported by the NWS API (only US locations are supported).`,
          },
        ],
      };
    }

    const forecastUrl = pointsData.properties?.forecast;
    if (!forecastUrl) {
      return {
        content: [
          {
            type: "text",
            text: "Failed to get forecast URL from grid point data",
          },
        ],
      };
    }

    // Get forecast data
    const forecastData = await makeNWSRequest<ForecastResponse>(forecastUrl);
    if (!forecastData) {
      return {
        content: [
          {
            type: "text",
            text: "Failed to retrieve forecast data",
          },
        ],
      };
    }

    const periods = forecastData.properties?.periods || [];
    if (periods.length === 0) {
      return {
        content: [
          {
            type: "text",
            text: "No forecast periods available",
          },
        ],
      };
    }

    // Format forecast periods
    const formattedForecast = periods.map((period: ForecastPeriod) =>
      [
        `${period.name || "Unknown"}:`,
        `Temperature: ${period.temperature || "Unknown"}°${period.temperatureUnit || "F"}`,
        `Wind: ${period.windSpeed || "Unknown"} ${period.windDirection || ""}`,
        `${period.shortForecast || "No forecast available"}`,
        "---",
      ].join("\n"),
    );

    const forecastText = `Forecast for ${latitude}, ${longitude}:\n\n${formattedForecast.join("\n")}`;

    return {
      content: [
        {
          type: "text",
          text: forecastText,
        },
      ],
    };
  },
);
  • The two server.tool(...) calls register our Tools on the server instance created earlier. The tool function takes a total of 4 arguments:
    1. Tool name: the unique name of the Tool. It must be unique within the MCP Server.
    2. Tool description (optional): a concise, clear description of the Tool's capabilities. It is used to tell the Client (and LLM) what the Tool is for.
    3. Input parameter schema: a Zod schema that defines the Tool's input parameters.
    4. Tool logic: a callback function containing the actual logic executed when the Tool is called. It receives the parsed input parameters as its argument.
  • get-forecast tool (a minimal extra-Tool sketch follows this list)
    • First argument: the Tool name, get-forecast.
    • Second argument: the Tool description.
    • Third argument: the input parameter schema, defining the two arguments (latitude, longitude). The describe() call on each argument specifies what it means, and this is passed to the LLM.
    • Fourth argument: the Tool logic. It calls the NWS API with the given latitude and longitude, retrieves the weather forecast for that area, parses it appropriately, and returns it at the very end. That return value becomes the Context provided to the LLM, and is ultimately the data that matters most.
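To see how little ceremony a Tool needs, here is the calculate_sum toy from the examples section, written against the same server instance. It is not part of the tutorial server; it is just a sketch of a second registration.

// Minimal extra Tool (not part of the tutorial): the calculate_sum toy from earlier.
server.tool(
  "calculate-sum",
  "Add two numbers",
  {
    a: z.number().describe("First addend"),
    b: z.number().describe("Second addend"),
  },
  async ({ a, b }) => ({
    content: [{ type: "text", text: String(a + b) }],
  }),
);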

Tool registration is complete, and now we need to add the code to run the MCP server as the last step.

src/index.ts
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error("Weather MCP Server running on stdio");
}

main().catch((error) => {
  console.error("Fatal error in main():", error);
  process.exit(1);
});
  • The MCP server supports two transport protocols (Transport Layer) for communication with the client:
    1. Stdio (Standard Input/Output)
    2. HTTP with SSE (Server-Sent Events)
  • The StdioServerTransport instance means this example uses Stdio.
  • Note that the startup message is printed with console.error: with the Stdio transport, stdout is reserved for protocol messages, so logs must go to stderr.

Build - MCP server

Build the src/index.ts file through the command below.

npm run build
  • Then you can see that a new build directory is created and an index.js file is created in it.
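Before wiring the server into Claude Desktop, you can also exercise it interactively with the MCP Inspector. The command below assumes the inspector package name from the official docs; if it has changed, check the documentation.

npx @modelcontextprotocol/inspector node build/index.js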

Claude Desktop configuration

Now all the code work is complete. It's time to test our MCP server by linking it with Claude Desktop.

Claude Desktop must be installed on your local computer. If it is not installed, you can download it here.

To connect the MCP server and Claude, you need to modify the Claude App configuration file.

~/Library/Application Support/Claude/claude_desktop_config.json

Open the file at this path (this is the macOS location; on Windows, it is %APPDATA%\Claude\claude_desktop_config.json) and add the following content.

claude_desktop_config.json
{
    "mcpServers": {
        "weather": {
            "command": "node",
            "args": [
                "/ABSOLUTE/PATH/TO/PARENT/FOLDER/weather/build/index.js"
            ]
        }
    }
}
  • Add an item called weather under the mcpServers item. This is the name of the MCP server we created above.
  • There are command and args under the weather item.
    • command: Run the weather server with node.
    • args: the path to the weather server's index.js file. It must be an absolute path, not a relative one.

Test MCP server with Claude Desktop

Now everything is really ready. Let's run Claude Desktop. (If it was already running, you need to exit and run it again.)

Claude MCP - 1
  • After completing all the settings and running Claude Desktop, you can see that a new hammer-shaped icon has appeared in the lower right corner of the chat message input box.

Claude MCP - 2
  • Clicking on the hammer icon will show you a list of MCP Tools registered with Claude Desktop.
    • You can see the two Tools we created (get-alerts, get-forecast).
  • If you have proceeded normally up to this point, you have successfully connected the MCP server and Claude Desktop!

Claude MCP - 3
  • What's the weather in Boston?
    • I asked about the weather in Boston in the chat window.
  • Based on this user query, the LLM decides which of the available Tools to call and specifies the appropriate Tool parameters to call it.
    • The selected Tool is get-forecast and the Parameters are latitude: 42.36, longitude: -71.06.
    • The LLM knows the latitude and longitude of Boston, so it can call the get-forecast Tool and specify the appropriate Parameters.
  • The get-forecast Tool's logic calls the NWS API and returns the weather forecast data it receives.
    • The LLM produces the final response by rewriting the forecast data returned by the Tool logic into a form that is easy for humans to read.

Let's summarize how each MCP component interacts sequentially through the sequence diagram below.

MCP Components sequence
MCP Components sequence diagram


  1. When a user interacts with the UI of the Host Application (Claude Desktop), the Host transmits the request to the MCP Server through the MCP Client.
  2. The MCP Client sends a request to the Server using the MCP Protocol, and the Server accesses external data sources (Data/API) if necessary to process the request.
  3. The Server returns the result obtained from the data source or the processing result to the Client in the form of an MCP Protocol response.
  4. The Client transmits the response back to the Host Application, and the Host Application displays or updates the final result to the user through the UI.

We were able to learn about today's weather in Boston through MCP. This contrasts with the result earlier in this article, when I asked about the weather in Seoul and Claude said it could not answer.

How did this difference occur?

Whether appropriate Context can be provided to the LLM

This is the key. It is important to provide the following elements as Context to the LLM:

  • The name and type of available Tools
  • The appropriate situation to call the Tool
  • Specific schema required for calling
  • Final data returned through Tool logic

By comprehensively providing this Context, it makes a difference in the quality of the final answer that the LLM creates.


LLM integration: MCP from a developer's perspective

Existing LLM-based services integrated with LLMs in only one direction: a server developed in-house called the LLM API. With MCP, the integration can become two-way, because a server I develop can be registered with the LLM host through MCP, and the LLM can then call and use that server.

Existing LLM integration

Direct LLM API Call
  • The initial user request is received by the User interface (Front-end), which calls the Application server API.
  • The Application server runs its internal business logic according to the type of request and, as a result, may call the LLM API client.
    • The Application server is responsible for specifying the parameters (system prompt, conversation history, additional Context) when calling the LLM API client. From the developer's perspective, this means the LLM has to be managed separately, at a cost.
  • The LLM API client calls the API provided by the LLM Provider (OpenAI, Anthropic, Google).
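As a rough sketch of this existing flow: the Application server assembles the system prompt and context itself and calls a provider API directly. The endpoint and payload below follow OpenAI's public Chat Completions API, but treat the details (model name, prompt wording) as illustrative rather than as this article's reference implementation.

// Existing integration: the Application server assembles everything and calls the LLM API itself.
async function answerWithContext(question: string, context: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o", // illustrative model name
      messages: [
        { role: "system", content: "Answer using only the provided context." },
        { role: "user", content: `Context:\n${context}\n\nQuestion: ${question}` },
      ],
    }),
  });
  const data = (await response.json()) as {
    choices: { message: { content: string } }[];
  };
  return data.choices[0].message.content;
}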

LLM integration with MCP

MCP integration
The application server is invoked by the LLM
  • The first place where user requests are received is the Host Application (Claude Desktop, Cursor, etc.).
    • The MCP Client inside the Host Application connects to all MCP Servers registered in the Host Application and provides the capabilities (Tools, Resources, Prompts) provided by each server to the LLM in an integrated way. When the LLM (in the case of Tools) or the user (in the case of Prompts, Resources) selects the required capability, the MCP Client transmits the request to the MCP Server that provides that capability.
  • The MCP Server receives and processes the request. In this process, it can also communicate with the Application Server.
  • The Host Application is responsible for injecting additional Context into the LLM.
    • In other words, from the developer's perspective, you only provide Context by implementing the MCP server.

Let's summarize the contents so far from the perspective of Inversion of Control:

| Feature | Existing method (APP Server ➡️ LLM API) | MCP method (LLM ➡️ APP Server) |
| --- | --- | --- |
| Control flow subject | Application Server (developer) | Host Application (MCP Client) |
| API call initiative | Application Server | Host Application |
| Context injection responsibility | Application Server (developer) | Host Application |
| Flexibility / Scalability | Low: expanding capabilities requires direct changes to Application Server code | High: capabilities can be flexibly expanded by adding/removing MCP Servers |
| Reusability / Modularity | Low: capabilities are tightly coupled to the Application Server | High: MCP Servers are independent, highly reusable modules |
| Development complexity | Relatively low: simple control flow, the developer controls everything directly | Relatively high: requires understanding the IoC pattern and the MCP architecture/protocol |
| Debugging / Maintenance | Relatively easy: intuitive control flow | Relatively difficult: control flow spans multiple interacting components |

That said, the MCP method is not strictly better than the existing one, and developers can still call the LLM API directly even alongside MCP, so this comparison should not be applied to every LLM integration.


Conclusion

MCP, announced in November 2024, is becoming more and more popular over time. Beyond Claude, it is being integrated with various services and tools such as Cursor, Docker, Puppeteer, GitHub, Redis, and PostgreSQL.

There are web pages (registries) that collect MCP clients and servers, where you can see how MCP is actually being used. Browsing them shows just how many places are adopting MCP. Some of these integrations greatly improve developer experience and productivity, and I plan to cover them in detail in other blog posts.

In this article, we only implemented and used Tools among the MCP server components. If you want to implement the other important items (Resources, Prompts), refer to the official MCP documentation. It is quite well written, so reading it carefully from the beginning is a good way to learn.

Then I will end this article here. Thank you for reading to the end!
