Using Database Transactions in Laravel
by Eric McWinNEr

Database transactions help us avoid situations where records are half-inserted into the database. Find out how to use them.


In this short article, we’ll go through what database transactions are, why you’ll want to use them, and how we can make use of database transactions in Laravel. Without further ado, let’s dive straight into it!

I was working on a class that let me import data from multiple (but related) CSV files and combine that data to create a bunch of inter-related models in the database.

While parsing about five different CSV files, I had to repeatedly create about nine different sets of interdependent models for every line in each file.

I noticed while testing that there would inevitably be times when the import failed midway, either because of a validation error like a typo, a database error like a violated unique key constraint, or some other issue. When such errors occurred, I'd be left with a database half-populated with records.

It wasn't a problem on my local machine because I could just run php artisan migrate:fresh, but if the import failed on the server (even the staging server), that would be a serious calamity. Imagine a CSV file with over 1,000 lines where the code fails halfway through on the 589th row: how do we manually roll back the records that were already created?

I needed a way to make the parse class atomic — it should either fail completely or pass completely (a zero or one situation), with no in-betweens or half-passes.

I went back to the drawing board, and my first thought was to create a separate table, track_parses, that keeps track of all CSV import attempts. Each row would represent a parsing session and would store a JSON-serialized object keeping track of every record I successfully created during that session.

With this method, I'd create a track_parses record at the beginning of each parsing session and update its object as I created records. I'd wrap the whole thing in a try-catch block, and if the code failed halfway, I could traverse the object and delete every record created before the function crashed.
When the function succeeded, I'd serialize the object to JSON and persist it to the track_parses table, leaving me with a record of everything each parsing session created and the ability to roll back any session later. This would give me an atomic parsing algorithm.

I was about to implement this before I came across Database Transactions.

Let's talk a bit about what database transactions are in the first place. A database transaction is a set of operations or queries sent to the database that it combines and treats as a single unit of work. This means you can send a bunch of queries and the database will treat them all as one action.

You can then roll back this action, which undoes all the queries in that unit, or commit it, which actually persists those changes to storage. I think of transactions as the database holding all your changes in memory, then writing them to disk when you commit or discarding them when you roll back.

Database transactions are available in most SQL databases, although how robust the support is and how it's implemented varies. The big players (MySQL, PostgreSQL, SQLite, Oracle, and SQL Server) all support transactions, so you shouldn't have any issues using them with your favorite SQL database.

With this explanation, you can see how transactions saved me a lot of code: all I had to do was fit my entire parse logic into a database transaction, wrap it in a try-catch block, commit the transaction if the code succeeded, and roll it back in the catch block.

While this didn't let me keep a record of what each parse session added or roll back after a successful import, it was a worthwhile trade-off: those weren't required features, and it freed up time to work on other things. Also, note that transactions didn't prevent me from keeping those records if I needed to. It wasn't an either-or situation; I could still have added the track_parses object on top of database transactions.

Using database transactions in Laravel is extremely easy and takes only a few lines of code. There are two ways to use transactions:

Closures

Laravel provides a transaction method on the DB facade that accepts a closure you can wrap your entire code in. This is what the code looks like:
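Here's a minimal sketch of the idea (the User and Account models, the request fields, and the controller itself are hypothetical placeholders, not the actual CSV import code):

```php
<?php

namespace App\Http\Controllers;

use App\Models\Account; // hypothetical model
use App\Models\User;    // hypothetical model
use Illuminate\Http\Request;
use Illuminate\Support\Facades\DB;

class UserController extends Controller
{
    public function store(Request $request)
    {
        // Everything inside the closure runs as one transaction: if either
        // create() call throws, Laravel rolls back and neither row is saved.
        DB::transaction(function () use ($request) {
            $user = User::create([
                'name'  => $request->input('name'),
                'email' => $request->input('email'),
            ]);

            Account::create([
                'user_id' => $user->id,
                'balance' => 0,
            ]);
        });

        return redirect()->back();
    }
}
```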

As you can see here, we wrap the entirety of our code in the closure passed to the transaction method, and Laravel automatically handles the transaction for us: it commits if the closure runs successfully and rolls back if an exception is thrown.

This method is good for simple queries like this, but I personally don't like using it for more complicated code for one simple reason: the closure has its own scope. This means I have to pass in every dependency the closure needs with the use() construct, and that becomes a lot of work as the dependencies grow.

Transaction Methods: beginTransaction(), commit(), rollBack()

Laravel also provides three static methods on the DB facade that allow us to take full control over how transactions are handled. This approach gives more flexibility and lets us define exactly when the transaction should be committed or rolled back.

Here's how we'd use these methods with the same example:
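Here's the same hypothetical example, this time managing the transaction manually:

```php
<?php

namespace App\Http\Controllers;

use App\Models\Account; // hypothetical model
use App\Models\User;    // hypothetical model
use Illuminate\Http\Request;
use Illuminate\Support\Facades\DB;

class UserController extends Controller
{
    public function store(Request $request)
    {
        // Start the transaction: nothing is persisted until we commit.
        DB::beginTransaction();

        try {
            $user = User::create([
                'name'  => $request->input('name'),
                'email' => $request->input('email'),
            ]);

            Account::create([
                'user_id' => $user->id,
                'balance' => 0,
            ]);

            // Everything worked, so write the changes to the database.
            DB::commit();
        } catch (\Throwable $e) {
            // Something failed: undo every query made since beginTransaction().
            DB::rollBack();

            throw $e;
        }

        return redirect()->back();
    }
}
```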

As you can see here, we get more control over when we want to roll back. We also don't need any use() constructs for the variables we reference.

NB: this is a very basic snippet showing database transactions. In a real app we would be more careful, e.g. validate and sanitize inputs, make sure we don't have unnecessary fields in our $fillable array, and so on.

If you're using MySQL and this code didn't work for you, it's probably because your tables use the MyISAM engine. MySQL's two main storage engines are MyISAM and InnoDB. InnoDB is more modern, faster than MyISAM, and supports transactions and foreign keys; generally speaking, it's the better option. You can check this article for a simple comparison of both engines.

Based on this, you'd expect MySQL to just use InnoDB as the default engine for tables, but unfortunately that isn't always the case. When creating migrations, you can specify the engine you want a table to use in the migration file. It'd look like this:
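Something like this, using a hypothetical users table as an example:

```php
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

// Inside the up() method of the table's migration:
Schema::create('users', function (Blueprint $table) {
    // Force this table to use InnoDB so it supports transactions
    // and foreign key constraints.
    $table->engine = 'InnoDB';

    $table->id();
    $table->string('name');
    $table->string('email')->unique();
    $table->timestamps();
});
```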

You can also change this setting for all future tables by updating the engine value in your config/database.php file. After doing this, your config would look like this excerpt:
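The relevant key is engine; an excerpt of the mysql connection might look like this (other settings left at their defaults):

```php
// config/database.php
'connections' => [

    'mysql' => [
        'driver' => 'mysql',
        // ... other connection settings ...
        'charset' => 'utf8mb4',
        'collation' => 'utf8mb4_unicode_ci',
        'prefix' => '',
        'strict' => true,
        // Use InnoDB for every table created by future migrations.
        'engine' => 'InnoDB',
    ],

],
```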

If you realize you’ve already created your database tables and they use the MyISAM engine (and like me, you’re working on a database with over 50 tables), you can easily create a migration to automatically convert all the tables in your database to your preferred engine. The migration would look like this:
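Here's a sketch of one way such a migration could look (it assumes a MySQL connection; the class name and the information_schema query are just illustrative):

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Support\Facades\DB;

class ConvertAllTablesToInnoDb extends Migration
{
    public function up()
    {
        // Find every table in the current database that still uses MyISAM.
        $tables = DB::select(
            "SELECT TABLE_NAME FROM information_schema.TABLES
             WHERE TABLE_SCHEMA = ? AND ENGINE = 'MyISAM'",
            [DB::getDatabaseName()]
        );

        // Convert each one to InnoDB.
        foreach ($tables as $table) {
            DB::statement("ALTER TABLE `{$table->TABLE_NAME}` ENGINE = InnoDB");
        }
    }

    public function down()
    {
        // Intentionally left empty: converting back to MyISAM isn't desirable.
    }
}
```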
