Bryan Friedman

The Evolving Technologist: Adventures of a Recovering Software Generalist

So... What Exactly Is Technical Marketing?

I've found there to be plenty of variance in the industry around job titles, so I usually don't put a ton of weight on them. I've had titles that weren't very descriptive of my actual role. I've had titles that seem to imply something that isn't even true about what I do. I've seen junior-sounding titles for people who seemed pretty senior, and senior-sounding titles for people who acted more junior.

Regardless of the names and levels, I've worked in technology for long enough to have collected several job titles that are difficult to explain at dinner parties or to family members. That's why, when I transitioned from enterprise IT into product management ten years ago, I wrote a post to help answer that dreaded question: “So... what do you do?”

Fast-forward a decade, a few roles, and a handful of technology fads later, and I’ve once again found myself in a job that even people inside tech sometimes struggle to define: technical marketing.

Despite being an important function, particularly for developer-facing products, the role of technical marketing can sometimes be confused with engineering, product, or traditional marketing. That's actually fair, though, because tech marketing does borrow elements from all three.

So how is technical marketing different?

How Technical Marketing Fits In

Admittedly, the first time I joined a marketing team, I was a bit trepidatious. Isn't marketing too far removed from the technology itself? Would I ever get to talk to an engineer or even write code again?

I quickly learned that actually, technical marketing isn't so far from product management, or any of the roles I've had in my career really. When I first got a job as a product manager, I described it as a role "in the middle", like the connective tissue between customers, engineering, and the business. Technical marketing lives in that same neighborhood, just a few doors down. While product management decides what to build and why, technical marketing focuses more on why what we built matters, and more importantly, how to show it.

I guess that might sort of sound like marketing more generally. Traditional marketing, or more specifically, product marketing, is indeed (at least partially) about telling the story of why something matters — the value proposition, perhaps you've heard it called. But it's that last piece I mentioned before, the "how to show it" part, that I think is the key distinction. A technical marketer can't just say something, they have to prove how it works and build understanding. If marketing inspires people to want to learn more, technical marketing helps them actually get there.

It's sort of analogous to the relationship between a sales associate and a sales engineer. When a salesperson pitches something in a room full of executives, talking about what it does might be enough. But invite some architects or developers into the room, and you better have a sales engineer there to field the tougher technical questions and show them how it works.

What About Developer Relations?

Another area I’ve worked in and around is Developer Relations, which I’d describe as at least adjacent to technical marketing. Both disciplines are about building trust with a technical audience, so there’s definitely overlap.

In my experience, Developer Relations is primarily about cultivating a community of practitioners, sparking curiosity, earning credibility, and helping people succeed whether or not they ever become customers. It’s about awareness, trust, and engagement. Technical Marketing, on the other hand, focuses more on enablement and adoption, showing how the product delivers value, differentiates, and solves real problems for customers and partners. It's not a perfect delineation (like I said, there's overlap), but I guess you could say DevRel makes fans and Technical Marketing builds believers.

Truthfully, all of these areas — PM, DevRel, and Tech Marketing — sit along the same bridge between technology, communication, and empathy. But each one might put a little more emphasis on a different area: Product Management on strategy, Developer Relations on community, Technical Marketing on proof and enablement. I’ve been fortunate to work in all three, and each helped sharpen a different set of skills: strategic clarity, technical depth, and creative communication.

It's why I love these types of roles so much. They let me bring all sides of myself to work: the analytical and the imaginative, the engineer and the storyteller, the tech enthusiast and the theatre kid. It’s where my left brain and right brain finally get equal billing.

So...What Do You Do?

There are probably several different views on what technical marketing is and how to define it. For me, when I explain my role, I find it’s helpful to break things into two categories (the same ones I used a decade ago when I wrote about product management): what we need to know, and what we actually do.

What Technical Marketers Need to Know

Unsurprisingly, the three key knowledge areas are pretty much the same as the ones I listed for PMs.

To do this job well, we have to know the product as more than just a list of features. We learn it by using it. We dig under the hood to understand how things work, explore the user workflows, and every now and then work through a rough spot to figure out what’s really going on.

We try to stay close to the market too. That means understanding not just who competes with us, but what existing customers and potential users are actually struggling with, what’s changing in their world, and where things are headed next. Context matters as much as capability.

The best technical marketers also pay attention to when something isn’t clicking for customers (and prospects). Whether it shows up during a demo, in a training session, or in the questions we hear out in the market, those moments usually reveal a gap in how we explain the product. That insight shapes what we build next, from clearer docs to new demos and enablement. Which leads nicely into...

What Technical Marketers Do

To me, the most fun (and challenging) part of technical marketing is crafting a narrative. I'm not talking about inventing spin, though, because credibility is key with a technical audience. I mean we distill the heart of the value and figure out the most compelling way to reveal it. We make technical concepts feel relevant and even exciting.

In my day-to-day, that might look like:

  • presenting a live or recorded demo
  • recording feature walkthroughs
  • building, facilitating, and maybe even delivering training courses
  • writing technical content
  • developing competitive materials to help sales and partners position the product
  • enabling field teams with deeper technical context
  • giving live product demos at tradeshows or events (I was doing this in only my second week on the job!)
  • taking questions, confusion, or objections and turning them into clearer messaging or new content

It's a lot of learning in public, which can sometimes mean pushing through impostor syndrome to ultimately show expertise and prove the narrative.

If you’re curious what this all looks like in practice, I’ve had a hand in a few recent examples of exactly this kind of work.

Why I Love It

Across every role in my career — IT, product, developer relations, and now technical marketing — the theme has been consistent: translate technology into possibility and turn complexity into confidence.

What’s great is that I get to blend analytical precision with creative expression. The architecture diagrams matter, but so does the storytelling arc. The tech enthusiast and the theatre kid get to show up every day. That combination is where I feel most at home.


Life Finds a Way with OpenRewrite Part 2: Code Evolution

When I last left off, I’d done the unthinkable. I resurrected my college senior project from 2003—the Help Desk Scheduler. It was running again as a Java 8 web app on Tomcat 4 with Struts 1.0 and MySQL. To continue my Jurassic Park metaphor, it was the software equivalent of a creature that shouldn’t exist anymore, but somehow came back to life.

Now that I’m at Moderne, I spend my days thinking about automated code transformation with OpenRewrite. (I'm super fun at parties.) So of course I wanted to see if this ancient app could evolve enough to survive in 2025. I don't need to go full Jurassic World reboot yet, but what if we can tweak things just enough to get to The Lost World at least?

From Batch Files to Build Tools

In college, my “build system” was literally a batch file that ran javac and copied the results into Tomcat's webapp folder. To modernize anything, I first needed a real foundation. It was time to pick: Gradle or Maven?

I chose Gradle, mainly because I was less familiar with it and wanted to learn, but also because it seems to be a common choice among devs I respect. So, to get things working, I had to:

  • Restructure the directories to use src/main/java and src/main/webapp
  • Set up a build.gradle.kts file (I went with Kotlin over Groovy because I'm a follower)
  • Declare dependencies for the old Servlet API and Struts 1.0 (which required a local JAR since the old version didn't seem to exist on any repository anywhere)
  • Specify Java 8 and configure the war plugin
  • Update my Docker Compose setup to build and deploy a WAR instead
  • Add the OpenRewrite Gradle plugin since there would be recipes in my future

For the first time in two decades, HDS had an actual build pipeline. Now OpenRewrite could start working its magic.

Automation: Nature’s Next Step

With Gradle in place, I was ready to run some recipes. I wanted to take this from barely runnable on Java 8 to something that could at least sort of live in the modern Java world. I started with UpgradeToJava21, which handled compiler targets and cleaned up a few deprecated APIs.
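With the Gradle plugin in place, activating a recipe like that is just a couple of lines of configuration plus a `./gradlew rewriteRun`. A sketch (the recipe module coordinate is assumed):

```kotlin
// build.gradle.kts (sketch): activate the Java 21 migration recipe
rewrite {
    activeRecipe("org.openrewrite.java.migrate.UpgradeToJava21")
}

dependencies {
    // pull in the recipe module that provides the migration recipes
    rewrite("org.openrewrite.recipe:rewrite-migrate-java:latest.release")
}
```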

Next came JakartaEE11, which migrated javax.* packages to jakarta.*. What could possibly go wrong at this point?

Everything. The changes were clean, but it turns out that Struts 1.0 simply wasn’t built for a Jakarta world, and the build logs made that abundantly clear. Huh. What to do?

Finding a Path Forward

First, I'd need a newer version of Tomcat. I got that up and running manually and figured I could automate it later. (Which I did...see below.)

Then, I considered trying an upgrade to Struts 2, but that honestly looked almost as hard as a full-scale rewrite. Same for moving off of Tomcat altogether to a Spring application. I hope to get there eventually, but this first step was just about some incremental change. I wanted to run Java 21 without too much manual effort, if possible. Could I automate everything with OpenRewrite and make it all work?

Rather than give up, I went hunting for a compatible solution and I stumbled upon Struts1 Reloaded, a modernized fork that aims "to bring Struts 1 to a current technology." This looked like the best route, at least for now. The latest version (1.5.0-RC2) supports more recent Jakarta namespaces. Sweet!

Using OpenRewrite dependency recipes, I swapped out the old framework for the new libraries. That meant replacing the old Servlet API with the new ones, retiring the old com.sun.mail packages in favor of new ones from Eclipse, and replacing the local Struts JAR with all new references to the Struts1 Reloaded libraries.
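To give a flavor of what one of those swaps looks like as declarative YAML, here's a sketch using the ChangeDependency recipe (from the rewrite-java-dependencies module). The Struts1 Reloaded coordinates below are illustrative, not necessarily the exact ones I used:

```yaml
# rewrite.yml fragment (sketch): swap the old Struts coordinates for Struts1 Reloaded
type: specs.openrewrite.org/v1beta/recipe
name: com.bryanfriedman.hds.SwapStrutsDependency
displayName: Swap legacy Struts for Struts1 Reloaded
recipeList:
  - org.openrewrite.java.dependencies.ChangeDependency:
      oldGroupId: struts                  # original Struts 1.0 coordinates
      oldArtifactId: struts
      newGroupId: io.github.weblegacy     # hypothetical Reloaded coordinates
      newArtifactId: struts-core
      newVersion: 1.5.0-RC2
```

(In my case the old Struts dependency was a local JAR rather than a repository coordinate, so the real change was a bit messier than this clean swap suggests.)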

Still got a bunch of build errors, but far fewer. Getting closer.

A Little Genetic Engineering

The errors were mostly type and method name changes from moving to Struts 1.5. Thankfully, those trusty old OpenRewrite standards ChangeType and ChangeMethodName came to the rescue for that. Action perform() is now Action execute()? No problem. Oh, ActionError is gone in favor of ActionMessage? Easy. But ActionMessages empty() needs to be ActionMessages isEmpty()? Done. Thanks OpenRewrite!

type: specs.openrewrite.org/v1beta/recipe
name: com.bryanfriedman.hds.MigrateStruts
displayName: Struts 1.1 to 1.5 API adjustments
description: ActionError→ActionMessage, perform→execute, messages.empty()→isEmpty
recipeList:
  - org.openrewrite.java.ChangeMethodName:
      methodPattern: org.apache.struts.action.Action perform(..)
      newMethodName: execute
      matchOverrides: true
  - org.openrewrite.java.ChangeType:
      oldFullyQualifiedTypeName: org.apache.struts.action.ActionError
      newFullyQualifiedTypeName: org.apache.struts.action.ActionMessage
  - org.openrewrite.java.ChangeMethodName:
      methodPattern: org.apache.struts.action.ActionMessages empty()
      newMethodName: isEmpty
      matchOverrides: true

But now, things got a little more complicated. There were two changes I needed to make, and I couldn't find any existing recipes to do the trick. But hey, I said I wanted to learn how to write some custom recipes. This was my chance. So I wrote two imperative recipes to handle these cases:

  1. DataSource access. The old Struts ActionServlet findDataSource() helper no longer worked. Those calls needed to be converted to use standard JNDI lookups.
  2. Method signature. The new Action execute() method in Struts 1.5 added a throws Exception declaration, meaning any overriding methods needed to also.
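To illustrate case 2, here's a tiny runnable sketch of why the overrides had to change. These stand-in classes are a hypothetical simplification of the framework API, just to show the constraint, not the real Struts classes:

```java
// Hypothetical stand-in for the Struts 1.5 base class, whose execute()
// now declares "throws Exception" (Struts 1.0's perform() did not)
class Action {
    public String execute() throws Exception {
        return "base";
    }
}

// Any subclass overriding execute() must declare the same (or narrower) throws
class LoginAction extends Action {
    @Override
    public String execute() throws Exception { // signature must match to override
        return "login handled";
    }
}

public class SignatureDemo {
    public static void main(String[] args) throws Exception {
        Action action = new LoginAction();
        System.out.println(action.execute()); // dispatches to LoginAction.execute()
    }
}
```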

I had a little bit of help from Claude Code to write these recipes. (I'd had some experience doing that at work.) But still, writing these custom solutions gave me such an appreciation for how elegant and extensible OpenRewrite really is when you need that level of precision. And the test framework is so easy to use that you can see exactly what needed changing in both cases.

One more little ChangeType tweak to broaden some exception handling, and the build finally worked. Too bad the run didn't...

JSPs, XML, and Other Endangered Species

Now the app was failing to render, so I knew it was time to look to the JSPs. In fact, the Struts 1 Template tags had been retired in favor of Struts Tiles. That meant a whole host of changes to the JSP files would be required.

Although OpenRewrite doesn’t parse JSPs, I was still able to automate the changes in my recipe by using the text-based FindAndReplace recipe to do some regex magic.
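For illustration, a text-based step like that might look like the following sketch. The tag pattern here is a made-up example of a Template-to-Tiles rewrite, not one of my actual replacements:

```yaml
# rewrite.yml fragment (sketch): regex rewrite of a retired JSP tag
type: specs.openrewrite.org/v1beta/recipe
name: com.bryanfriedman.hds.MigrateJspTags
displayName: Rewrite retired Struts template tags in JSPs
recipeList:
  - org.openrewrite.text.FindAndReplace:
      find: "<template:insert template='([^']+)'>"   # illustrative pattern
      replace: "<tiles:insert page='$1'>"
      regex: true
      filePattern: "**/*.jsp"   # only touch JSP files
```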

And finally, for the XML config files (web.xml, struts-config.xml, and Tomcat 11's server.xml), I used some XML recipes to make the necessary changes, and some Create*File recipes to drop in the new ones.
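Dropping in a brand-new file can be as simple as a CreateTextFile step. A sketch, with a placeholder file name and contents:

```yaml
# rewrite.yml fragment (sketch): create a new config file from scratch
type: specs.openrewrite.org/v1beta/recipe
name: com.bryanfriedman.hds.AddServerConfig
displayName: Add a Tomcat 11 server.xml
recipeList:
  - org.openrewrite.text.CreateTextFile:
      relativeFileName: tomcat/server.xml   # placeholder path
      overwriteExisting: true
      fileContents: |
        <?xml version="1.0" encoding="UTF-8"?>
        <Server port="8005" shutdown="SHUTDOWN">
          <!-- ...rest of the Tomcat 11 configuration... -->
        </Server>
```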

It's Alive... Again

After all the dust settled, the project now:

  • Builds cleanly with Gradle
  • Includes a build step within Docker Compose
  • Runs on Java 21 / Tomcat 11
  • Uses Struts 1.5 (Reloaded)

Is it modern? Not really. But it does compile cleanly, deploy reproducibly, and doesn’t require completely ancient toolchains to run. I'm quite proud that (after manually setting up Gradle) I completely automated all of the changes using only OpenRewrite. It was all refactored through deterministic, repeatable automation. No frog DNA required.

Evolution, Meet Ambition (and Chaos Theory)

There’s a point in every old-code resurrection where you remember Dr. Ian Malcolm again. Just because you can modernize something doesn’t mean you should.

That said, though, I’m probably not done experimenting. Some next steps I’m considering:

  • Swap MySQL for Postgres. (Probably only config changes?)
  • Skip right over Struts 2 and try a Spring Boot migration, just to see how far automation can carry it.
  • Move off of JSPs to...something else?

Each attempt is both an exercise and a curiosity test: how much of this can be done through recipes rather than rewriting by hand?

Part 1 was the resurrection; this sequel is the evolution. Give it a few more transformations and I'll have my own cinematic universe.


Life Finds a Way with OpenRewrite: Resurrecting a Long-Extinct Java App

I'm sure you know this classic line from Jurassic Park:

“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” —Dr. Ian Malcolm, Jurassic Park

That quote was about cloning dinosaurs, but honestly, it also applies pretty well to a scenario I'm now facing: resurrecting long-dead source code.

Recently, I joined the team at Moderne, where I’ve been diving into the world of automated code remediation and transformation using OpenRewrite. It’s powerful tech that can scan large codebases and apply structured, deterministic changes like it actually understands the code (because it basically does).

I've been wanting to write some custom OpenRewrite recipes as a learning exercise, and I've been looking for a good idea to try out. This got me thinking: What kind of code would be fun (and maybe a little terrifying) to experiment on?

That’s when I remembered Help Desk Scheduler, the scheduling system I built for my senior project in college circa 2003. It was powered by Struts 1, backed by MySQL 4, and built with Java 1.2 with no build tool in sight.

What could possibly go wrong?

Digging Up Fossils

The Help Desk Scheduler (HDS) was a handy tool I created while working at the Cal Poly ITS Help Desk. It was a web app that generated work schedules for staff and students based on user-defined rules. It let supervisors schedule student workers, manage shift swaps, and view schedules in various formats. It solved a real problem, and it actually worked!

Back then, frameworks like Struts 1 were still new and exciting, Tomcat 4 was the default server choice, and MySQL was all the rage. For this project, I built everything with raw javac commands in a batch file. No WAR file. No CI/CD. Just a folder full of class files and a dream.

Somehow, I had kept a copy of the source code, preserved like a mosquito in amber, complete with J2EE DNA. So I decided to see: could I bring this fossilized application back to life and use it as a playground for OpenRewrite?

Reanimating the App

Getting it running again meant digging through ancient APIs, outdated assumptions, and a build process so flaky that I can hear Dennis Nedry now: “Uh uh uh, you didn’t say the magic word!”

Here’s what it took to get it running in 2025:

  • Sticking with Tomcat 4 (v4.1.24 downloaded from the Apache archives)
  • Using the oldest "supported" version of MySQL: 8.0.13 (plus a newer JDBC connector)
  • Tweaking the database schema and JDBCRealm configuration to support MySQL 8
  • Swapping out JavaMail SMTP for MailDev (good enough for these purposes)
  • Dockerizing everything with docker-compose to make things easy to run (and portable)
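The rough shape of that Docker Compose setup is sketched below. Service names, image tags, environment values, and ports are illustrative, not my exact configuration:

```yaml
# docker-compose.yml (sketch): app + database + throwaway mail server
services:
  hds:
    build: .                    # custom image with Tomcat 4.1.24 and the app
    ports:
      - "8080:8080"
    depends_on:
      - db
      - mail
  db:
    image: mysql:8.0.13         # oldest "supported" MySQL
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: hds
  mail:
    image: maildev/maildev      # stand-in SMTP server with a web UI
    ports:
      - "1080:1080"             # web UI
      - "1025:1025"             # SMTP
```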

After a weekend of tinkering and a few "I can't believe this is working" moments, I had the app running again in my browser, in all of its ugly, table-based, CSS-less glory.

You can check out the repo here.

But... Why?

Let’s be clear: I have no plans to offer a commercial or even production-ready app. But having a working legacy app gives me a useful, safe, and fun sandbox for experimenting with modern code transformation using OpenRewrite.

Some experiments I’m thinking about trying on this codebase:

  • Migrate from MySQL to PostgreSQL
  • Upgrade to Java 21
  • Replace Struts with Spring MVC (or Struts 2, or something else)
  • Move hardcoded config to external properties
  • Swap out the authentication layer
  • Eventually maybe even refactor into something that resembles a modern Spring Boot app

Most of these are squarely in OpenRewrite’s wheelhouse, but I can also try for some stretch goals to give me a way to explore how far automation can take things vs. where manual intervention may still play a critical role.

What’s Next

Now that I’ve got this prehistoric app running again, I plan to document my OpenRewrite experiments in future posts. I’ll explore what works and what breaks, and where automation helps or not when dealing with very old Java code.

But for now, I’m just happy that I got this dinosaur of a project running again. Not because I should, as a wise chaotician might say, but because I could. I hope the servlets don't bite!


Career Refactoring

It was 11 years ago (to the day, if you can believe that) that I started a new job after leaving my first job out of college. (Fun fact: that was also an 11 year run.) Since then, I’ve been charting my career journey in this space. It sure has been quite a ride, filled with diverse roles, inspiring leaders, and wildly different company cultures.

Most of my time has been spent inside large enterprise companies with tens of thousands of employees. In my most recent roles, I’ve even been focused on selling software and application platforms to enterprise customers, giving me a unique view from both sides of the table.

That said, I’ve also had a couple of stints in startup and startup-like environments with as few as 100 people. The contrast between those experiences and the enterprise world is stark. It reminds me of some movie quotes...

“There’s a difference between knowing the path and walking the path.” Enterprises tend to be structured. In fact, with rigid processes, strictly defined roles, and lots of layers, I’d say they are often too structured. By contrast, in startups you’ll find more fluid responsibilities with a frequent need to adapt on the fly.

“All we have to decide is what to do with the time that is given to us.” Startups move fast. Decisions happen quickly. Iterations happen faster. In a big company, on the other hand, getting anything done usually means wading through a frustrating swamp of cross-functional alignment meetings, approvals, and never-ending loops.

“The study of pressure and time.” Sure, enterprises come with an abundance of resources, but agility usually isn’t one of them. Startups might be resource-constrained, but this has a way of forcing creative thinking, building resilience, and leading to more innovative outcomes.

“Old and busted, new hotness.” Startups can build with the latest tools, trends, and tech from the ground up. Enterprises, meanwhile, are often tied to legacy systems and are forced to drag a lot of baggage along for the ride. It’s much harder to steer the ship into new waters.

While I have not spent the majority of my career in startups, I’ve loved the time that I have. I still vividly remember my first exposure to startup speed. A bug was discovered, and a fix was coded, tested, and pushed to production all within an hour. My mind was blown. That one moment taught me more than months in the enterprise, and I got to tap into skills I didn’t even know I had.

Eventually, though, I got sucked back into the enterprise machine, and I didn’t fully realize how much it had started to wear on me. The longer I stayed, the more my disillusionment grew, chipping away at my energy and motivation until it ultimately broke me down.

Now, at last, I am building myself back up. I’m thrilled to say, I’m heading back into startup-land. Today, I’m joining Moderne as a Technical Marketing Lead. It checks so many boxes for me.

True Modernization. Moderne is tackling a challenge close to my heart: improving code quality and reducing technical debt at scale through automated refactoring. As a former product manager, I still have scars from punting on feature work so the team could upgrade dependencies, migrate to TypeScript, or swap logging libraries. The opportunity to improve developer productivity and enable tech stack liquidity, particularly for enterprise companies with massive code bases, is incredibly exciting.

Closer to Code. After years in infrastructure and application platforms, it feels good to get closer to where software actually gets written. I may be in a marketing role, but I’ll still get to frequently nerd out about parsers, visitor patterns, and Lossless Semantic Trees (LSTs) thanks to the magic of OpenRewrite, the open source project powering Moderne’s platform.

AI That Matters. The AI boom has been overwhelming, but Moderne isn’t just bolting on AI for buzz. They’re thoughtfully weaving it into the platform, using a hybrid approach that combines their rules-based system of deterministic recipes with everything LLMs and machine learning bring to the table.

Broad Skill Application. I’ve always gravitated toward roles that blend technical expertise and depth with strength in soft skills like communication, collaboration, storytelling, and problem-solving. Moderne’s small and nimble team gives me the chance to wear multiple hats and contribute wherever I’m needed most.

People I Respect. I’m lucky to be joining a team full of folks I’ve admired for a while. It’s energizing to be surrounded by smart, driven people. Plus, there’s a strong Java foundation here that keeps me connected to my friends in the Spring community.

Remote First. The Moderne team is globally distributed, and I’ve been working remotely since before it was cool. While I certainly appreciate in-person meet-ups on occasion, async communication suits me just fine. I've been able to build trust through consistent delivery rather than relying on physical presence, and with today’s collaboration tools, it’s easy for remote teams to stay connected and effective.

As I step into this next chapter, I’m excited to help reshape how developers write and maintain software by making refactoring easier, faster, and smarter. Let’s go!


Automated Refactoring Meets Edge Deployment: An Exploration of OpenRewrite and EVE-OS

I know from my experience working for and with enterprise companies that keeping dozens or hundreds (or thousands!) of apps up to date is complicated. Much of my career in tech has been spent in and around the cloud-based platform and modern application development spaces in an attempt to help solve this problem for customers. But I also spent time as a product manager working directly with developers, so I’ve seen how even with automated CI/CD pipelines, modern app architectures, and robust app platforms, it ultimately comes down to effectively managing a code base and often tackling mountains of tech debt along the way. I remember having to spend precious sprint cycles on cleaning up and refactoring whole swaths of code instead of focusing on delivering features for end users.

I’ve also seen over the past many years how even the most successful moves to cloud can still lead to a lot of challenges when it comes to data migration. Plus, with the explosion of Internet-of-Things (IoT) devices, it’s getting more and more difficult to ship data off to the cloud for processing. It’s been fun to watch the trend towards edge computing to combat these obstacles, but of course, that brings its own set of challenges from a scaled management perspective. I remember working on this almost ten years ago with automated bare metal hardware deployments, but now there is even more to consider!

These are hardly solved problems, but thankfully, a few of my former colleagues have ended up at companies where they are addressing them with some very innovative solutions. In my career, I’ve been extremely lucky to meet and work with some truly smart people, and one of the perks of knowing so many sharp folks in tech is that just by following their career paths, I can keep up to date with a lot of industry trends and get exposed to technologies that are new to me. This is how I became aware of two open-source projects that I’ve recently been exploring...

OpenRewrite

OpenRewrite is an open-source tool and framework for automated code refactoring that’s designed to help developers modernize, standardize, and secure their codebases. With all the tech debt out there among enterprise teams managing large Java projects in particular, OpenRewrite was born to work with Java, with seamless integration into build tools like Gradle and Maven. But it’s now being expanded to support other languages as well.

Using built-in, community, or custom recipes, OpenRewrite makes it easy to apply any changes across an entire codebase. This includes migrating or upgrading frameworks, applying security fixes, and imposing standards of style and consistency. The OpenRewrite project is maintained by Moderne, who also offers a commercial platform version that enables automated refactoring more efficiently and at scale.

EVE (Edge Virtualization Engine)

EVE is a secure, open-source, immutable, lightweight, Linux-based operating system designed for edge deployments. It’s purpose-built to run on distributed edge compute and to provide a consistent system that works with a centralized controller to provide orchestration services and a standard API to help manage a fleet of nodes. Think about having to manage hundreds (or more!) of small-form-factor devices like Raspberry Pis or NUCs that are running in all sorts of places across different sites.

With EVE-OS, devices can be pre-configured and shipped to remote locations to limit the need for on-site IT support. And with its Zero Trust security model, it protects against bad actors who may more easily gain access to these edge nodes, which often live outside the protection of a formal data center. Because it is hardware agnostic and supports VMs, containers, Kubernetes clusters, and virtual network functions, it also has the ability to run applications in a variety of formats. EVE-OS is developed by ZEDEDA specifically for edge computing environments and aims to solve some of these unique challenges around running services and applications on the edge. They also offer a commercial solution for more scalable orchestration, monitoring, and security.

Let’s Build Something!

There isn’t exactly an obvious intersection of interest here, but bumping into these projects independently, right around the same time, got me thinking about how I could experiment with both of them and build something that balances practical OpenRewrite usage with something deployable via EVE-OS. This is what I came up with:

  1. Write a very simple but somehow outdated Spring Boot REST app
  2. Use OpenRewrite to refactor and “modernize” it
  3. Containerize the resulting modern app
  4. Deploy it to an EVE-OS “edge node” [locally]

Of course, this only scratches the surface of the potential that these technologies have, but it turned out to be a pretty fun exercise for getting started by just dipping my toe a bit into each of these areas. In case you’re interested in getting your feet wet too, I’ve summarized the steps I took below, including a link to the code I used.

Refactoring a Simple Legacy Spring Application

As a developer, my Java knowledge is admittedly relatively surface level, but I do know enough to write a working REST controller. Here’s my simple class that just calls a basic endpoint and spits back out its JSON result:

package com.example;

import org.springframework.web.bind.annotation.*;
import org.springframework.web.client.RestTemplate;
import org.springframework.http.MediaType;

@RestController
public class HelloController {

    @RequestMapping(value = "/", method = RequestMethod.GET, produces=MediaType.APPLICATION_JSON_VALUE)
    public String hello() {
        System.out.println("Calling external service...");
        RestTemplate client = new RestTemplate();
        String response = client.getForObject("https://httpbin.org/get", String.class);
        return response;
    }
}

My Spring skills are pretty outdated, so I would say a refactor is most certainly in order. Accordingly, I figured I’d use OpenRewrite to accomplish three primary things when updating this code:

  • Use the newer dedicated @GetMapping as an alternative for @RequestMapping
  • Use the SLF4J Logger instead of the elementary System.out.println
  • Upgrade from Spring Boot 2.x to 3.x
    • I didn’t show my pom.xml file here, but I used version 2.3 and will upgrade to 3.2

There are definitely other things I could choose to update. For example, I didn’t opt to write test cases in a test class, but if I had I could also have migrated from JUnit 4 to 5. I also saw some articles that suggested updating RestTemplate to RestClient or even the asynchronous WebClient. I didn’t find any recipes for this, though I could maybe tackle writing a custom one, but I left that out of scope for now. I’m satisfied with this limited example.

Since I first learned to build Spring apps with Maven, that’s what I opted to use here (but there is support for Gradle as well). The basic Maven plugin command to run for OpenRewrite is mvn rewrite:run, but that requires defining configuration and parameters in pom.xml. I wanted to keep everything dynamic and on the command line, so I passed everything in using the -D flag to define the properties:

$ mvn -U org.openrewrite.maven:rewrite-maven-plugin:run \
      -Drewrite.exportDatatables=true \
      -Drewrite.recipeArtifactCoordinates=org.openrewrite.recipe:rewrite-spring:RELEASE \
      -Drewrite.activeRecipes=\
        org.openrewrite.java.spring.boot3.UpgradeSpringBoot_3_2,\
        org.openrewrite.java.spring.NoRequestMappingAnnotation,\
        com.example.ReplaceSystemOutWithLogger

You can see the three active recipes that I passed in to perform the tasks I outlined above. The first two are recipes straight from the OpenRewrite catalog. The last one is too, sort of, but in order to pass it the necessary configuration options, I created a rewrite.yml file in the root of the project:

type: specs.openrewrite.org/v1beta/recipe
name: com.example.ReplaceSystemOutWithLogger
recipeList:
  - org.openrewrite.java.logging.SystemOutToLogging:
      addLogger: true
      loggingFramework: SLF4J
      level: info

This specifies what logging framework and log level to use. The active recipe references whatever name is used here, hence com.example.ReplaceSystemOutWithLogger.
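As an aside, if you’d rather not pass everything on the command line, the same setup can live in pom.xml instead. Here’s a rough sketch of what that plugin configuration might look like (the version numbers are placeholders you’d pin yourself):

```xml
<plugin>
  <groupId>org.openrewrite.maven</groupId>
  <artifactId>rewrite-maven-plugin</artifactId>
  <version><!-- pin a plugin version --></version>
  <configuration>
    <activeRecipes>
      <recipe>org.openrewrite.java.spring.boot3.UpgradeSpringBoot_3_2</recipe>
      <recipe>org.openrewrite.java.spring.NoRequestMappingAnnotation</recipe>
      <recipe>com.example.ReplaceSystemOutWithLogger</recipe>
    </activeRecipes>
  </configuration>
  <dependencies>
    <dependency>
      <groupId>org.openrewrite.recipe</groupId>
      <artifactId>rewrite-spring</artifactId>
      <version><!-- pin a recipe version --></version>
    </dependency>
  </dependencies>
</plugin>
```

With that in place, a plain mvn rewrite:run does the same job as the long command above.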

And that’s it. Running the mvn command above does the magic, fixing the pom.xml file to reference Spring Boot 3.2 and updating the controller code as follows:

package com.example;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.MediaType;

@RestController
public class HelloController {
    private static final Logger logger = LoggerFactory.getLogger(HelloController.class);

    @GetMapping(value = "/", produces=MediaType.APPLICATION_JSON_VALUE)
    public String hello() {
        logger.info("Calling external service...");
        RestTemplate client = new RestTemplate();
        String response = client.getForObject("https://httpbin.org/get", String.class);
        return response;
    }
}

Notice that @GetMapping has replaced @RequestMapping and that the System.out.println has been replaced with a logger call. The code still builds and runs fine, but now it’s up to date!

Here’s the repository with the full set of code: https://github.com/bryanfriedman/legacy-spring-app. It has the original code in main and the updated code on the refactor branch so you can use git diff main..refactor or your favorite diff tool to compare.

Deploying the Refactored App to an EVE “Edge Node”

Now that we have a running, refactored app, let’s deploy it to “the edge.” But first, we need an EVE node. The easiest way to set up a virtual EVE node locally, it turns out, is to use a tool called Eden (clever) as a management harness for setting up and testing EVE. Eden also helps us run Adam (also clever), an open-source reference implementation of an LF-Edge API-compliant controller, which we need in order to control the EVE node via its API. Eden is neat because it lets you deploy, delete, and manage nodes running EVE, the Adam controller, and all the required virtual network orchestration between them. It also lets you execute tasks on the nodes via the controller.

To accomplish this setup, I mostly followed an EVE tutorial that I found extremely helpful. It outlines the process of building and running Eden and establishing the EVE node and Adam controller. However, the tutorial was written for Linux, so I ran into a few snags in my macOS environment. As such, I ended up forking eden and tweaking a few minor things just to get it working on my machine, mostly around getting the right qemu commands to run the environment. You can see the specifics in the forked repo. And of course, while the tutorial describes running a default nginx deployment to test things out, I deployed this Spring app instead. I also discovered that I needed to explicitly configure port forwarding for the deployed pod in order to reach the app for testing.

Here are the slightly modified steps that I took:

Prerequisites

I installed the following prerequisites where they weren't already present, using brew where possible and otherwise downloading and installing: make, qemu, go, docker, jq, git.

Prepare and Onboard EVE

  1. Start required qemu containers:
$ docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
  2. Build Eden (I used my fork as indicated above):
$ git clone https://github.com/bryanfriedman/eden.git && cd eden/
$ make clean
$ make build-tests
  3. Set up the Eden configuration and prepare port 8080 for our app:
$ ./eden config add default
$ ./eden config set default --key eve.hostfwd --value '{"8080":"8080"}'
$ ./eden setup
  4. Activate Eden:
$ tcsh
$ source ~/.eden/activate.csh
  5. Check status, then onboard EVE:
$ ./eden status
$ ./eden eve onboard
$ ./eden status

Deploy the app to EVE

  1. Deploy the Spring app from Docker Hub:
$ ./eden pod deploy --name=eve_spring docker://bryanfriedman/legacy-spring-app -p 8080:80
  2. Wait for the pod to come up:
$ watch ./eden pod ps
  3. Make sure it works:
$ curl http://localhost:8080
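Since jq is already installed from the prerequisites, it’s handy for sanity-checking the JSON that comes back. httpbin’s /get endpoint echoes details about the request, so, assuming the pod returns httpbin’s usual payload shape, you could pull out a single field like this (shown here against a trimmed sample rather than the live pod):

```shell
# Trimmed sample of the httpbin-style JSON the app returns
sample='{"args": {}, "url": "https://httpbin.org/get"}'

# Extract just the "url" field; against the live pod you would pipe
# `curl -s http://localhost:8080` into jq instead of echoing a sample
echo "$sample" | jq -r '.url'
```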

Conclusion

After all this work, I’m not suddenly an expert in automated refactoring or edge computing, but I do have a much better understanding of the technologies behind these concepts. While they might not seem particularly related, I can definitely see how a company looking to modernize its apps at scale might be interested in both paradigms, especially if it’s also considering migrating those apps to run at the edge. Even with these rudimentary examples, you can start to see the potential they offer at scale.