Slow Loading Blogs
Why post pages can be slow (possible causes)

Our blog application reads markdown files from the local posts directory, extracts metadata using gray-matter, and converts the markdown content to HTML using remark and remark-html. While this approach is elegant and simple, it can introduce performance issues when scaled or deployed in certain environments. The slowness you notice is not caused by one single factor, but by the combined effects of blocking file operations, runtime markdown processing, and server rendering overhead.

The first major factor is file I/O and synchronous operations. The getAllPosts and getPost functions use fs.readFileSync(), a blocking call: while the server reads a file, the event loop cannot process other work. This is fine for a handful of files but becomes inefficient as the number of posts grows, because every request triggers multiple file reads, especially when the homepage lists all posts. Since Node.js runs JavaScript on a single thread, these blocking calls inflate response times and can make the app feel sluggish when loading or switching pages.

Another significant cause of delay is the markdown-to-HTML conversion performed by remark at runtime. Markdown processing is CPU-intensive, especially when combined with plugins like remark-html or rehype. Each time a user opens a new post, the server parses and converts the markdown into HTML again, which can easily add 200–500 milliseconds to the response time for longer articles. On serverless platforms like Vercel or Netlify, where each function runs in isolation, this process happens fresh on every cold start, increasing the time to first byte (TTFB).
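Because a post's markdown rarely changes between requests, the conversion result can be memoized per slug. A sketch of the idea, where renderMarkdown is a stand-in for the real remark/remark-html pipeline (stubbed here so the example is self-contained):

```javascript
// Sketch: memoize the expensive markdown-to-HTML step so repeated
// requests for the same post skip re-parsing.
const htmlCache = new Map();

function renderMarkdown(markdown) {
  // Placeholder for the real pipeline, e.g. remark().use(html).process(...).
  return `<p>${markdown.trim()}</p>`;
}

function getPostHtml(slug, markdown) {
  if (!htmlCache.has(slug)) {
    htmlCache.set(slug, renderMarkdown(markdown));
  }
  return htmlCache.get(slug);
}
```

Within a single warm server instance this collapses the 200–500 ms conversion into a one-time cost per post, though, as the next section notes, the cache still evaporates on cold starts.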

While you use a simple in-memory cache to store posts, it only persists for the lifetime of the server instance. This means that in a serverless or horizontally scaled setup, the cache resets whenever the server restarts or scales up. As a result, every fresh instance re-reads files and re-processes markdown, negating the performance benefit of caching. Furthermore, reading and sorting all posts for the homepage every time getAllPosts() runs compounds the latency, since each read involves parsing YAML frontmatter, computing timestamps, and then sorting by date and time before returning results.
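Even with the per-instance limitation, the repeated parse-and-sort work inside getAllPosts() can be done once per instance rather than once per call. A minimal sketch, with loadPosts standing in for the real file-reading and frontmatter-parsing code (an assumption, not the app's actual loader):

```javascript
// Sketch: parse timestamps and sort the post list once per server
// instance, instead of re-doing it on every call to getAllPosts().
let sortedPostsCache = null;

function getAllPosts(loadPosts) {
  if (!sortedPostsCache) {
    sortedPostsCache = loadPosts()
      .map((post) => ({ ...post, time: Date.parse(post.date) }))
      .sort((a, b) => b.time - a.time); // newest first
  }
  return sortedPostsCache;
}
```

This does not survive a cold start, but it does ensure a busy warm instance pays the YAML-parsing and sorting cost once instead of on every homepage hit.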

Another layer of slowdown comes from how Next.js rendering is configured. If the app uses server-side rendering (SSR), markdown parsing and HTML generation happen on every request. This keeps data fresh but adds significant latency compared to static site generation (SSG) or incremental static regeneration (ISR), where pages are pre-rendered at build time or periodically revalidated. SSR is valuable for truly dynamic content, but since blog posts are static files, pre-rendering them once at build time would dramatically reduce runtime work and speed up loading.
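In the App Router, switching to SSG is mostly a matter of telling Next.js the set of slugs up front. A sketch of what that could look like; the file path, the "@/lib/posts" import, and the post.html field are assumptions about the app's layout, not its actual code:

```javascript
// Sketch of app/posts/[slug]/page.js (names assumed, not the app's
// real file layout). generateStaticParams tells Next.js to pre-render
// every post at build time instead of converting markdown per request.
import { getAllPosts, getPost } from "@/lib/posts";

export async function generateStaticParams() {
  const posts = await getAllPosts();
  return posts.map((post) => ({ slug: post.slug }));
}

// Optional ISR: re-generate a page at most once per hour.
export const revalidate = 3600;

export default async function PostPage({ params }) {
  const post = await getPost(params.slug);
  return <article dangerouslySetInnerHTML={{ __html: post.html }} />;
}
```

With this in place, the markdown-to-HTML cost moves entirely into the build step, and requests serve static HTML.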

Additionally, the remark-html conversion and date formatting add small but cumulative costs. Calling toLocaleDateString() on every render introduces overhead because each call resolves locale data anew. That may not sound like much, but when rendering a list of posts it adds measurable delay. The same applies to the sort in getAllPosts(), which parses and compares timestamps for every post on every run.
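The locale-lookup cost can be paid once by hoisting a single Intl.DateTimeFormat instance out of the render path. A small sketch (the formatPostDate name and the en-US/UTC options are illustrative choices, not taken from the app):

```javascript
// Sketch: create one formatter up front instead of calling
// toLocaleDateString() per post, which re-resolves locale data each time.
const postDateFormat = new Intl.DateTimeFormat("en-US", {
  year: "numeric",
  month: "long",
  day: "numeric",
  timeZone: "UTC", // keep output independent of the server's timezone
});

function formatPostDate(isoDate) {
  return postDateFormat.format(new Date(isoDate));
}
```

Each call now does only the cheap format step, which matters when a homepage maps over dozens of posts.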

The size and complexity of the markdown files also influence performance. Longer posts or files with many embedded images, code blocks, or rich markdown syntax increase processing time. Each additional plugin, such as syntax highlighters or remark extensions, adds more parsing passes. If you rely heavily on these, the CPU work grows quickly. When every user request triggers this heavy computation, response time increases significantly.

Another subtle but important contributor is the lack of asset optimization. When markdown files contain direct <img> tags or unoptimized image URLs, the browser must load these large assets directly, often without caching or responsive handling. This slows down page rendering and increases layout shift, especially on mobile. Leveraging Next.js’s built-in <Image /> component or a rehype plugin to optimize images would improve the loading experience.

Finally, rendering and hydration on the client side also affect perceived performance. Even if the server returns HTML quickly, the browser still needs to parse, paint, and hydrate the React tree. If all components are treated as client components, the browser must download more JavaScript before the page becomes interactive. Splitting the app so that post pages are server components and only the interactive features (like search or navigation) are client components will improve load and hydration times.
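In the App Router this split is expressed with the "use client" directive: only the interactive widget opts into client-side JavaScript, while the post body stays a server component. A sketch, with the file name and SearchBox component being assumptions for illustration:

```javascript
// Sketch of components/SearchBox.js (name assumed). This is the only
// piece that ships client-side JavaScript; pages that render the post
// body remain server components and hydrate nothing.
"use client";

import { useState } from "react";

export default function SearchBox({ onQuery }) {
  const [query, setQuery] = useState("");
  return (
    <input
      value={query}
      placeholder="Search posts…"
      onChange={(e) => {
        setQuery(e.target.value);
        onQuery(e.target.value); // notify the parent of the new query
      }}
    />
  );
}
```

Everything without the directive stays server-rendered, so the JavaScript bundle shrinks to just the interactive islands.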

Overall, the slowness in post pages stems from a mix of blocking disk reads, CPU-bound markdown processing, lack of persistent caching, and server-side rendering overhead. The most effective solutions are to move file reading and markdown conversion to build time using static generation, use asynchronous file operations, and implement smarter caching. Combining these improvements with static asset optimization and selective hydration can make the blog feel instant, even with hundreds of posts.

In short, the issue isn’t with the UI itself—it’s with when and how the data is fetched and processed. By handling markdown at build time and serving pre-rendered HTML instead of converting on every request, you eliminate the heavy processing steps that slow down page loads. The end result is faster response times, smoother navigation, and a much more efficient and scalable architecture.