When I think about Wasm—and I am starting to think about Wasm a lot—I imagine it like those magic grow capsules you had as a kid: Just pour water over one of the capsules, and it expands to many times its size, in a variety of shapes and colors.
Likewise, Wasm—or WebAssembly, as it’s formally known—started out “small,” as a binary instruction format for a stack-based virtual machine, originally intended for the browser. It’s still that, but as developers “pour water on it,” I expect we will watch it grow and shape-shift in all manner of ways—not least, as a way to write applications once and deploy them on the growing number of hardware and software platforms that drive (sometimes literally) our lives.
The hardware side is seeing an especially big boom, relatively speaking. There was a time not so long ago when Intel was to hardware processors as Kleenex is to facial tissue. Today, it’s a polyglot hardware world. Arm, RISC-V, Apple M1/M2, AWS Graviton, Ampere, Fujitsu, and others have joined Intel and AMD processors as development targets for a host of new use cases, from lightbulbs to cars to enterprise applications, and especially web servers, which glue everything together in a modern world. All of these hardware architectures, not to mention the legacy stuff that’s still lurking around, will need to live side by side for many years to come.
There are also lots of different languages. Java and .NET, of course, have enabled developers to write an application once and run it anywhere, but with some overhead and limitations. With Java and .NET, there’s an implicit understanding that your entire ecosystem of software will be written in, well, Java or .NET. Although .NET supports a handful of languages, it never truly became a universal, polyglot runtime for any and every language. With Java and .NET there was always an understood separation between the OS, which managed polyglot processes (Python, Ruby, Perl, C, C++, etc.), and the VM, which managed processes written within its own software ecosystem (Java, .NET).
There have been attempts to expand the Java Virtual Machine (JVM) to other languages with JRuby, Jython, and more, but they’ve never quite caught on. The operating system has always served this cross-language role through standardized C libraries, which nearly every language uses, but it’s never been easy to share libraries between higher-level languages, for example Python and Ruby. Perhaps what’s always been needed is some universal binary format!?
Where Wasm fits in
The World Wide Web Consortium (W3C) first announced Wasm, i.e., WebAssembly, back in 2015 and published the specification in 2018. Originally designed for use in the browser, Wasm is now garnering interest as a potential barrier breaker across different hardware and software environments. The original vision for Wasm was security-driven: let developers safely use compiled languages like C, C++, or Rust in the browser, while at the same time preventing that code from taking over a user’s machine.
Since then, the vision has evolved into something similar to Java or .NET, but for every language, enabling developers to compile any language into a binary that could run on any platform through Wasm interpreters, whether in the browser, on the desktop, directly on a server, at the edge, or even as a plugin framework within other pieces of software. The vision has become true portability across a wide spectrum of use cases. Though it’s not there yet.
There are certain use cases that Wasm makes perfect sense for today, including browser-based apps and plug-ins. But Wasm is starting to expand: language interpreters themselves, like Python’s, can be compiled into Wasm binaries that run on a Wasm interpreter. You can run my Python programs because the Python interpreter is compiled into that format, and that Python interpreter can in turn run on any piece of hardware that has a Wasm interpreter. Furthermore, we’re seeing Wasm evolve such that the same Python program might be able to leverage libraries from other languages that run in the Wasm interpreter. We’re seeing a push to expand portability on both the hardware side and the software side simultaneously.
Right now, the most popular language used with Wasm is Rust. People are compiling Rust down into these Wasm binaries and using them all over. We’re seeing it in the aforementioned plugins, but also in very specific use cases like in Envoy, the popular network proxy, and in Krustlet, which is a program designed to replace the Kubernetes Kubelet. Instead of OCI containers, Krustlet will just run Wasm binaries.
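To make the Rust-to-Wasm workflow concrete, here is a minimal sketch of the kind of program people compile down to a Wasm binary. The build commands in the comments assume the `wasm32-wasip1` target and the Wasmtime runtime are installed; the same source also compiles and runs natively, which is part of the appeal.

```rust
// A minimal Rust program of the kind commonly compiled to Wasm.
// Build and run steps (assuming rustup and wasmtime are installed):
//   rustup target add wasm32-wasip1
//   rustc --target wasm32-wasip1 hello.rs -o hello.wasm
//   wasmtime hello.wasm
// The identical source also builds natively with plain `rustc hello.rs`.

fn greet(target: &str) -> String {
    format!("Hello, {}!", target)
}

fn main() {
    // On a Wasm interpreter, this prints via WASI's stdout;
    // compiled natively, it goes through the OS as usual.
    println!("{}", greet("Wasm"));
}
```

The point of the sketch: nothing in the source is Wasm-specific. The portability comes entirely from choosing a different compilation target.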
Beyond Rust, we’re starting to see people using C, C++, Ruby, and Python, so polyglot support is forming on the software side, too. Furthermore, we’re also seeing tools like Podman and CRI-O evolve to use OCI containers and Wasm together, instead of replacing OCI. Normally, with OCI containers, the binaries in the container image are run directly on the kernel of the underlying container host. This has the limitation that the binary in the container must be compiled for the host’s hardware architecture.
But crun, a container runtime commonly used by Podman and CRI-O, includes an experimental feature that detects Wasm binaries inside the container image and runs these binaries on a Wasm interpreter installed on the host. Running Wasm and OCI containers together also could provide an extra layer of security, as well as the ability to run the same containers on any number of underlying hardware architectures and operating system versions.
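As an illustrative sketch only, the workflow might look like the following. The annotation name comes from the WasmEdge/crun documentation and may differ across versions, and this assumes a crun built with Wasm support, so treat every detail here as an assumption to verify locally.

```shell
# Package a Wasm binary in an ordinary OCI image, annotated so the
# runtime knows what it contains (annotation name is an assumption
# based on WasmEdge/crun docs; check your versions):
buildah build --annotation "module.wasm.image/variant=compat" -t my-wasm-app .

# Run it with Podman. A Wasm-enabled crun detects the annotation and
# hands the binary to a Wasm interpreter (e.g., WasmEdge) instead of
# exec'ing it directly on the host kernel:
podman run --rm my-wasm-app
```

The same image could then be pushed to any registry and pulled to any host with a Wasm interpreter, regardless of CPU architecture.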
While there is a lot of potential in Wasm, there are still some blockers.
For one thing, language support needs to be expanded, but, as noted, APIs for more and more use cases (networking support is sorely lacking) are being added to more and more languages as we speak.
Perhaps a bigger issue is that Wasm, a very specific instruction set architecture, is not currently POSIX-compliant, so you can’t use it to do a lot of standard things developers have come to expect (think of things like opening a file or a network socket). The Wasm folks are adding an extra API on top, called WASI (the WebAssembly System Interface), that will allow Wasm to do some of these general compute things, but, until that happens, Wasm’s use will be limited.
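To see what WASI buys you, consider ordinary file I/O in Rust's standard library. Compiled natively, these calls go through POSIX; compiled for the `wasm32-wasip1` target, the very same code goes through WASI instead, and only for directories the host explicitly grants. This is a minimal sketch under those assumptions.

```rust
// Ordinary file I/O via Rust's standard library. Natively this lowers
// to POSIX calls; on wasm32-wasip1 it lowers to WASI calls instead,
// and the runtime must grant filesystem access, e.g.:
//   wasmtime --dir=. app.wasm
use std::fs;
use std::io;

fn roundtrip(path: &str, contents: &str) -> io::Result<String> {
    fs::write(path, contents)?;           // open + write (POSIX or WASI)
    let back = fs::read_to_string(path)?; // open + read, likewise
    fs::remove_file(path)?;               // clean up
    Ok(back)
}

fn main() -> io::Result<()> {
    let text = roundtrip("wasi_demo.txt", "portable bytes")?;
    println!("read back: {}", text);
    Ok(())
}
```

Capability-based access (the explicit `--dir` grant) is the key design difference from POSIX: a Wasm module can touch only what the host hands it.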
There is a lot of discussion right now around Wasm’s potential, not to mention millions of dollars of venture capital being invested in companies working to evolve the technology. I see that as money well spent because I can imagine a time in the very near future when a developer working on a Mac, Windows, or RHEL workstation or laptop will be able to compile an application for an edge, IoT, cloud, or automotive platform, and they will do it using Wasm. Today, that workflow requires cross-compiling or emulation, which is inconvenient, slow, or both.
The developer doesn’t want to cross-compile the code; they just want to compile it, run it locally, test it, move it wherever, and have it just work. Wasm, theoretically, can make that happen.
Wasm also has implications for making containers even more portable and powerful.
Using crun, my colleague Giuseppe Scrivano has done two proofs of concept (PoC) showing how a Wasm binary can be run from a Docker/OCI container using Podman/crun as a stand-alone container on Linux, or using CRI-O/crun in Kubernetes. In either case, the container runtime was smart enough to detect the Wasm binary and run it with a Wasm interpreter. (At the time of this writing, crun supports WasmEdge, Wasmer, and Wasmtime.)
Giuseppe’s PoC demonstrates that Wasm could enable you to run the same container image on any hardware you want—or, at least, anywhere there is a Wasm interpreter. This would conveniently negate the need to compile and build different container images for, say, RISC-V, Arm, or x86. Today, that’s what we’re forced to do: If we need the binary to run on three different hardware architectures, we have to compile and build it three times, create three different container images, and push them all to a registry server. Giuseppe’s PoC shows that, with Wasm, a developer could build just once and deploy anywhere (one of the dreams we’ve always had with containers).
If we could do that, that’s literally the hybrid cloud story. Imagine a Kubernetes cluster with some RISC-V, some Arm, some Intel, all running in a bunch of different nodes. I could pull down an app and run it wherever it runs best, fastest, cheapest, or closest to the consumer…
The potential for Wasm is pretty exciting, and I think we can get there, especially if the Wasm folks can get the language and especially the POSIX issues resolved. POSIX has existed in operating systems for more than 30 years, and you can’t ignore three decades of legacy software.
The WebAssembly community is very aware of the Wasm trajectory that people see and wish for. They are working on APIs that will remove some of the blockers and extend Wasm in a way that will make it more useful for more use cases. With all of that in place, Wasm will become more of a general-purpose architecture that organizations can leverage to support and optimize the hybrid cloud model.
At Red Hat, Scott McCarty is senior principal product manager for RHEL Server, arguably the largest open source software business in the world. Focus areas include cloud, containers, workload expansion, and automation. Working closely with customers, partners, engineering teams, sales, marketing, other product teams, and even in the community, Scott combines personal experience with customer and partner feedback to enhance and tailor strategic capabilities in Red Hat Enterprise Linux.
Scott is a social media startup veteran, an e-commerce old timer, and a weathered government research technologist, with experience across a variety of companies and organizations, from seven-person startups to 12,000-employee technology companies. This has culminated in a unique perspective on open source software development, delivery, and maintenance.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to firstname.lastname@example.org.
Copyright © 2022 IDG Communications, Inc.