Share fields between the create and edit forms in Laravel

This is a simple one.

In your applications you are likely to have resources that you create and edit with forms. The create and edit forms are basically the same (the fields are exactly the same), but they work in slightly different ways.

In the create view you start with an empty form, while in the edit view you start with the form filled with the current data. But, in both views you’ll want the old input (aka the input that was just submitted) in case the validation fails and you are returned to the form.

So, if you want to write the fields only once and then share them with both views, this is a fairly clean way to do it:
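For example, for a hypothetical email field on a $user model, the Blade input could look something like this:

    <input type="email" name="email" value="{{ old('email', $user->email ?? null) }}">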

If the field passed in the first parameter is not set, the old() helper function returns null or the value passed as the second parameter. Using an expression with the null coalescing operator as the second parameter gives us the ability to still return null if neither of the previous values is set.

This works as expected in both views. In the create view, $user->email is not set and null is returned (so the field is empty); in the edit view, $user->email is set and it is returned. Unless old input exists, in which case the old input is returned.

You could also write it this way, to the same effect:
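Again for the hypothetical email field, the equivalent form chains the null coalescing operator instead of using old()'s second parameter:

    <input type="email" name="email" value="{{ old('email') ?? $user->email ?? null }}">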

Laravel Elixir and Zurb Foundation, revisited

Edited on 25 November 2016: Check the bottom of the post for an Elixir 6 update.

Last time I wrote about using the Foundation framework with Laravel Elixir, I said that the npm package didn't install the needed dependencies and that, because of that, we had to use the Bower package. I'm not sure if I was wrong or if that changed after I wrote it, but that's not the case now.

Not only that, but we now have a new version of the front-end framework, Foundation for Sites 6. So I think it's a good idea to rewrite the tutorial for replacing Bootstrap with Foundation in Laravel Elixir.

0. Set up

Before we start, make sure you have Node.js, npm and gulp (as a global package) installed and working. In the root of your project you should be able to run the npm install and gulp commands without errors.

Check the Laravel Elixir documentation for more info.

1. Install Foundation

1.1. If you haven’t yet, install the npm dependencies. Before you run the following command you may want to remove the bootstrap-sass line from the package.json file, so it doesn’t install the Bootstrap framework.
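In a default Laravel project that is simply:

    npm install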

1.2. If the Bootstrap framework is already installed, you can remove it with the following command.
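Assuming it was installed as a dev dependency (the Laravel default), something like this should remove it:

    npm uninstall bootstrap-sass --save-dev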

1.3. Install the Foundation framework.
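The Foundation 6 package on npm is called foundation-sites, so the command should be along these lines:

    npm install foundation-sites --save-dev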

1.4. (optional) Foundation 6 comes with a new Motion UI, a Sass library for creating flexible UI transitions and animations. Install it with the following command.
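Motion UI is published on npm as motion-ui, so presumably:

    npm install motion-ui --save-dev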

If you don’t want to use it, just remove any references to Motion UI in the following files.

2. Create sass files

Now, you’ll need two files, usually in the “resources/assets/sass” folder. These are the files that you’re going to edit.

2.1. Copy the “_settings.scss” file from “node_modules/foundation-sites/scss/settings” to “resources/assets/sass”. This file contains all the parameters of the various elements of Foundation that you can customize.

2.2. In “resources/assets/sass” create a file called “app.scss” with the following content. This is the main file, where you do all the imports / includes and write your own sass.
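A minimal sketch of what that file might contain (the import paths assume the foundation-sites and motion-ui packages installed above and the includePaths set in the gulpfile below; the list of component includes is abbreviated):

    @import 'settings';
    @import 'foundation';
    @import 'motion-ui';

    @include foundation-global-styles;
    // @include foundation-grid;
    // @include foundation-typography;
    // @include foundation-button;
    // ... remaining component includes ...

    @include motion-ui-transitions;
    @include motion-ui-animations;

    // Your own SCSS goes below.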

My advice is to comment out all the includes and only uncomment the ones you need, as you need them (note that foundation-global-styles seems to be needed pretty much always).

3. Write the Elixir tasks

Now, you have to write the Elixir tasks that will compile the SCSS to CSS and copy and concatenate the JavaScript files.
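A sketch of the gulpfile.js for this setup, written for the Elixir version current at the time (pre-6); the include paths and the exact list of component scripts are assumptions to adapt to what you actually use:

    var elixir = require('laravel-elixir');

    elixir(function (mix) {
        // Compile app.scss, resolving imports against the npm packages
        mix.sass('app.scss', 'public/css', {
            includePaths: [
                'node_modules/foundation-sites/scss',
                'node_modules/motion-ui/src'
            ]
        });

        // Copy and concatenate the JavaScript Foundation needs
        // (paths are relative to resources/assets/js by default)
        mix.scripts([
            '../../../node_modules/jquery/dist/jquery.js',
            '../../../node_modules/foundation-sites/js/foundation.core.js',
            '../../../node_modules/foundation-sites/js/foundation.util.mediaQuery.js',
            '../../../node_modules/foundation-sites/js/foundation.dropdownMenu.js',
            'start_foundation.js'
        ], 'public/js/all.js');
    });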

In the JavaScript task, we copy and concatenate the files needed for Foundation to work properly. At a minimum, we need “jquery.js” and “foundation.core.js”. Then we add the files for the components that we’ll use (the files listed are only an example). The “start_foundation.js” file includes only the code to initialize Foundation ($(document).foundation();).

4. Run gulp

Finally, you are ready to run gulp or gulp watch (to run the tasks automatically every time something changes). Doing that will generate a file called “app.css” in “public/css” and a file called “all.js” in “public/js” that you can include in your view files.

Note: If you want the created files to also be minified you must add the --production flag (gulp --production or gulp watch --production).

Update to Elixir 6

For some reason that I can’t understand, Elixir 6 doesn’t support Babel anymore. Now it only includes rollup.js and webpack. I don’t know if it’s possible to use either of these bundlers with Foundation, but I haven’t been able to make them work so far. If you know how, let me know in the comments.

I solved this issue by installing Babel and the other tools needed, writing a native gulp task and then calling that task from the elixir function. This is how:

1. Install the needed tools
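The exact packages depend on your setup, but something along these lines should cover Babel, concatenation and minification:

    npm install --save-dev gulp-babel babel-core babel-preset-es2015 gulp-concat gulp-uglify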

And require them at the top of gulpfile.js.
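For example:

    var gulp = require('gulp');
    var babel = require('gulp-babel');
    var concat = require('gulp-concat');
    var uglify = require('gulp-uglify');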

2. Write the gulp task

Write a gulp task, outside the elixir function, that compiles the files with Babel, concatenates them, minifies them (with uglify) and outputs the result to the desired path. Notice that the relative paths are different here: they are relative to the project root (where gulpfile.js is).
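A sketch of such a task, using the tools required above; the source files and output path are only examples to adapt:

    gulp.task('scripts', function () {
        return gulp.src([
                'node_modules/jquery/dist/jquery.js',
                'node_modules/foundation-sites/js/foundation.core.js',
                'resources/assets/js/start_foundation.js'
            ])
            .pipe(babel({ presets: ['es2015'] }))   // compile with Babel
            .pipe(concat('all.js'))                 // concatenate into one file
            .pipe(uglify())                         // minify
            .pipe(gulp.dest('public/js'));          // output to public/js/all.js
    });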

3. Call the task from elixir

Finally, inside the elixir function, instead of calling mix.babel, call the task that you wrote. The first parameter is the name of the task and the second is the path to watch for changes that should trigger the task. You may want to adjust this to your needs.
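Assuming the task was named scripts as in the sketch above:

    elixir(function (mix) {
        mix.task('scripts', 'resources/assets/js/**/*.js');
    });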

Of course, you may also call the task directly from the command line with gulp scripts.

4. Sass

The sass compilation works almost the same way as before. The only difference is that it now takes four parameters, so you should call it like this:
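Assuming, as described above, that the plugin options now go in the fourth parameter, the call would look something like this:

    mix.sass('app.scss', 'public/css', null, {
        includePaths: [
            'node_modules/foundation-sites/scss',
            'node_modules/motion-ui/src'
        ]
    });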

How to use Foundation with Laravel Elixir

Update: a new version of this tutorial is available.

When it comes to front-end frameworks, Bootstrap (formerly known as Twitter Bootstrap) is the most popular. But popular does not mean better, and I’ve always preferred Foundation. So far, to use Foundation in a Laravel project I simply did a regular installation inside the assets directory. Now, with the release of Elixir, we can simplify things a bit and use a single task runner for all the tasks we want to automate (compiling SCSS, concatenating files, running tests…).

Let’s see how to use Foundation in a Laravel project.

0. Set up Elixir

Before we start, ensure that you have Elixir up and running. For this, it’s necessary to have Node.js (with npm) and gulp installed. Then, just run the following command at the root of the project (where the “package.json” file is). More information in the Laravel documentation.
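That is, simply the dependency install:

    npm install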

By default, “package.json” includes Twitter Bootstrap as a dependency. Because we will use Foundation, we can delete that line before running the previous command. To remove Twitter Bootstrap after it’s installed just run:
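Assuming it is registered as a dev dependency (the default), something like:

    npm uninstall bootstrap-sass --save-dev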

By this point, we should be able to run the gulp command without errors.

1. Install Bower

Bower is the package manager that we’ll use to install Foundation. Note that it should be installed globally (hence the -g flag).
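That is:

    npm install -g bower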

Then we have to initialize Bower. For this, create a file called bower.json (where the only required value is “name”) or run the initialization tool:
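The initialization tool is run with:

    bower init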

Don’t forget to add the “bower_components” folder to “.gitignore”.

According to the Foundation page on GitHub there is also an npm package, which would make it unnecessary to install Bower. But from what I saw, if we install Foundation via npm, none of the dependencies (jQuery, Modernizr, etc.) are installed. Hence the preference for Bower.

2. Install Foundation

We are finally ready to install Foundation, with the following command:
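At the time, the Bower package was simply called foundation, so presumably:

    bower install foundation --save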

After installation, we’ll need two files in the “resources/assets/sass” folder. These are the files that we’re going to edit.

2.1. Copy the “_settings.scss” file from “bower_components/foundation/scss/foundation” to “resources/assets/sass”. This file contains all the parameters of the various elements of Foundation that we can customize.

2.2. In “resources/assets/sass” create a file called “app.scss” with the following content.
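A sketch of the file, assuming the includePaths set in the Elixir task below point at the Bower package; the component imports are only examples:

    @import "settings";
    @import "foundation";

    // Alternatively, import only the components you need, for example:
    // @import "foundation/components/grid";
    // @import "foundation/components/top-bar";
    // @import "foundation/components/buttons";

    // Your own SCSS goes below.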

This is the main file, which makes all the necessary imports and where we can write our own SCSS. Note that it’s preferable to import only the components that we are actually going to use, in order to limit the size of the generated CSS file. For this, we must comment out or delete the line @import "foundation"; and uncomment the required component imports.

3. Write the Elixir tasks

Finally, we just have to write the Elixir tasks that will compile the SCSS to CSS and copy some required JavaScript files.
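A sketch of the gulpfile.js; the Bower paths and the list of component scripts are assumptions to adapt to your own setup:

    var elixir = require('laravel-elixir');

    elixir(function (mix) {
        // Compile the SCSS, resolving imports against the Bower package
        mix.sass('app.scss', 'public/css', {
            includePaths: ['bower_components/foundation/scss']
        });

        // Copy and concatenate the JavaScript Foundation needs
        // (paths are relative to resources/assets/js by default)
        mix.scripts([
            '../../../bower_components/jquery/dist/jquery.js',
            '../../../bower_components/foundation/js/foundation/foundation.js',
            // component files, only as an example:
            // '../../../bower_components/foundation/js/foundation/foundation.topbar.js',
            'start_foundation.js'
        ], 'public/js/all.js');

        // Modernizr is copied on its own so it can be included in the <head>
        mix.copy('bower_components/modernizr/modernizr.js', 'public/js/modernizr.js');
    });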

In the Sass task, we set the includePaths option so we can simplify all the imports in the .scss files. This task will create an “app.css” file in “public/css” that we can then use on our website.

In the JavaScript tasks, we copy and concatenate the files needed for Foundation to work properly. At a minimum, we need “jquery.js” and “foundation.js”. Then we add the files for the components that we’ll use (the files listed are only an example). The “start_foundation.js” file includes only the code to initialize Foundation.

This will generate a file called “all.js” in “public/js” that we can then use.

The “modernizr.js” file is copied separately because, normally, this file should be included in the head of the HTML document, while the rest can be included at the end of the body.

4. Conclusion

And that’s it. Now, simply type gulp to run the tasks once, or gulp watch to run them whenever one of the files changes.

If we want the created files to also be minified we must add the --production flag (gulp --production or gulp watch --production).

 

Solving the issue of referral spam in Google Analytics

At this time, Google Analytics has a big issue: so-called referral spam. Especially on sites with relatively low traffic, it pretty much ruins the reported data.

Google is taking forever to solve the problem, so, for now, it’s up to the user to try to solve it. Fortunately, I found this article that presents a solution.

Definitive Guide to Removing Referral Spam – Analytics Edge

The solution is simpler than you might think. Instead of individually filtering out each of the referrers (which doesn’t seem to work very well), you create a filter that only lets through traffic that originates from the site where the Analytics code is installed.

I implemented this solution today, let’s see how it works.

Change the custom posts order in WordPress

The fact that WordPress was not born as a CMS still reveals itself on a regular basis. An example of this is when we want to show, in an order defined by us, a number of elements belonging to a custom post type which, in turn, belong to a custom taxonomy (at a URL like /<taxonomy-name>/<taxonomy-term>). By default they are shown in order of creation.

This implies creating a new instance of WP_Query with all the arguments needed to get what we want.

Creating the array of arguments is not that simple, mainly because it’s necessary to know what the arguments are, which ones we need, which ones we can ignore, and so on. The simplest way to create this array is to reuse the work that WordPress has already done.

Just take the original query’s arguments array and merge it with only the arguments we want to change; in this example, orderby and order. We do this with the PHP function array_merge, which combines a number of arrays, using the values of the last array in case of conflict (exactly what we want).
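As a sketch, in the taxonomy template (e.g. taxonomy-<taxonomy-name>.php) it could look like this, assuming we want to order by menu_order (the field that plugins like Simple Page Ordering manipulate) and that the loop body is just an example:

    <?php
    // Reuse the arguments of the original query and override only the ordering.
    $args = array_merge($wp_query->query, array(
        'orderby' => 'menu_order',
        'order'   => 'ASC',
    ));

    $custom_query = new WP_Query($args);

    while ($custom_query->have_posts()) {
        $custom_query->the_post();
        the_title('<h2>', '</h2>'); // output whatever your template needs here
    }

    wp_reset_postdata();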

Quick and easy.

Related tip: rather than setting the order individually on each post’s edit page, it’s easier to use a plugin like Simple Page Ordering.

Screenshots of any size, with PhantomJS

The problem

Sometimes we need a screenshot of a web page at a resolution different from our own screen’s. If the resolution is lower than our screen’s, it can be done: just resize the window to the desired size and use one of the many plugins available, although, sometimes, they aren’t very reliable if we need an exact resolution. If the resolution is bigger, it becomes a bit more complicated. If we need to simulate a higher pixel density (aka retina), even worse.

I came across this problem when I needed to create screenshots for my portfolio. I had bought one of those .psd files with computers and mobile phones of various sizes. For some reason, all these files depict Apple products and, to look right, they needed screenshots with a resolution identical to that of Macs and iPhones.

The solution

This is where PhantomJS comes in: a headless WebKit browser that can be scripted with JavaScript. It has several use cases; among them, the one that interests us today, generating screenshots.

The script to do this is quite simple.
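A minimal sketch of such a script; the URL, dimensions and output name are placeholders to edit:

    // screenshot.js
    var page = require('webpage').create();

    var url = 'http://example.com';  // page to capture
    var width = 1440;                // viewport width
    var height = 900;                // viewport height
    var scale = 2;                   // 2 roughly simulates a 2x screen (see the notes below)
    var output = 'screenshot.png';   // output file

    page.viewportSize = { width: width, height: height };
    page.zoomFactor = scale;

    page.open(url, function (status) {
        if (status !== 'success') {
            console.log('Failed to load ' + url);
            phantom.exit(1);
            return;
        }

        // Give the page a moment to finish rendering before capturing.
        window.setTimeout(function () {
            page.render(output);
            phantom.exit();
        }, 1000);
    });

Running phantomjs screenshot.js then writes the image to the chosen output file.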

To get a screenshot just write this script in a file (for example screenshot.js), edit the desired values and run PhantomJS.

The scale = 2 is the value that simulates a screen density of 2x (sort of, see the notes below). For a normal screenshot, the value should be 1.

Note that this is just a quick and dirty way to solve the problem. The script could be improved in several ways, including accepting parameters instead of hardcoding the values into the script.

Some notes:

  • The scale = 2 does not completely emulate a high density screen. It just doubles the size of the content; the device pixel ratio remains 1, which rules out, for example, correctly generating a screenshot of a smartphone screen.
  • The current version (1.9) does not support Google Web Fonts (more specifically, .woff files). The next version (2.0) solves this problem. You can find a build of version 2 here.
  • The height of the viewport appears to have no influence on the height of the screenshot, but after the screenshot is generated it’s easy to crop it to the correct size.

Options to write code

Since I’m in the mood for change (from CakePHP to Laravel, starting to write tests for my code…), I also decided to reconsider the application I use to write code. Mainly, I wanted to decide whether to continue with a regular text editor or switch to an IDE.

These were the options I considered (including my current option):

Text Editors

Sublime Text

It has been my choice (and that of many people). It’s what a text editor should be: lightweight and easy to use, but at the same time with features that can almost compete with an IDE.

Atom

Atom, made by GitHub, is in hot pursuit of Sublime Text and it’s even quite possible that it will overtake it. It’s heavily inspired by Sublime Text and already rivals many of its features. But it’s still not as “polished” and, in terms of resources, it’s heavier (at least for now).

IDEs

PhpStorm

It is to IDEs what Sublime Text is to text editors: more or less the default choice. Justifiably, I think. It seems there is nothing PhpStorm can’t do. It also seems to be the most customizable of the three I tried (Laracasts has a free course on PhpStorm that shows its full potential).

NetBeans

NetBeans was my choice during my university days, at the time to program in Java. I don’t think much has changed since then, which is not necessarily a bad thing. The first thing that caught my attention was the code suggestions for best practices and more readable code. For example, if we have too many nested blocks of code it suggests, as a good practice, moving that code to a new function.

However, I noticed that it’s too strict with warnings and errors. There are even situations where it shows an error on lines of code that, as far as I can tell, have no problem.

Eclipse

Eclipse turned out to be a revelation. I remember the days when it took about two days to start. This time, I installed it just to see how it was doing. To my surprise: a perfectly normal launch time and memory usage at a fairly good level for an IDE. At first glance it seems quite comparable to the other two, which already shows how much it has improved compared to what it was.

Conclusion

Of course I like the simplicity and speed of a text editor, but there is no denying that an IDE provides functionality beyond the reach of a text editor, which makes writing code easier and saves a lot of time.

It’s a matter of which IDE to use. PhpStorm would be the obvious choice; it’s clearly the most evolved of the three I tried. But it’s a paid app, and when you’re bootstrapping a startup, every euro counts.

I’m still a little undecided between NetBeans and Eclipse. NetBeans seems more robust out of the box, but Eclipse seems more extensible through plugins. At this point I may be slightly more inclined towards Eclipse (who knew?), but, just to be sure, I think I will try each of them for an extended period of time (say, a month) and then decide which one I’m most comfortable with.

Goodbye CakePHP, hello Laravel


I’ve changed frameworks. Goodbye CakePHP, hello Laravel. I was not dissatisfied with CakePHP, but Laravel seemed like a more modern framework that invites best programming practices. And, as I decided to rewrite Balliza from scratch, it seemed like a good time to change.

It’s not a definitive goodbye to CakePHP. Actually, I want to start a small side project in CakePHP, just to keep myself fluent in the framework. I just don’t know what yet.

As some of the concepts are new to me (dependency injection, inversion of control…), my first weeks with Laravel put up a bit of a fight. But I’m very pleased to have made the change. I feel that, in recent weeks, I have evolved considerably as a programmer (maybe it’s just my impression, but the truth is that I learned a lot).

To finish, here are two resources that have helped me a lot with the first steps:

  • Laracasts, a site with video tutorials that teaches almost everything you need about Laravel and beyond. Some of the videos are available for free and, for 9 dollars a month, you get access to all the content. In my opinion, it has been money very well spent.
  • Laravel News, a website with useful links on the framework. Subscribe to the weekly newsletter to receive all the news and articles.

Multilingual site with Polylang

I’ve been thinking for a while about making this site available in both Portuguese and English. I just needed a plugin for it, but I kept delaying the decision because I didn’t much like either of the two options I knew of.

One of the plugins, qTranslate, is free, but I don’t like how it works, putting all the translations inside the same post (with custom tags). It also can’t keep up with WordPress development: I’ve already found myself unable to update WordPress because of qTranslate. Incidentally, this seems to be happening right now. The plugin hasn’t been updated since January and, according to the information on its plugin page, doesn’t work with the latest versions of WordPress.

The other option, WPML, is, I think, one of the most used solutions for multilingual WordPress sites, and a very good one, from what I’ve read. But it’s a paid plugin and I don’t know if it would justify the investment. While it would be interesting to have this site in English, it wasn’t a priority.

The fact is that, with a quick search, it was easy to find some alternatives. Of these, Polylang stood out. It seems to be fairly widely used (300,000 downloads) and in active development (last updated a few days ago).

Now that it’s configured and all the content is translated, I want to share my first impressions of the plugin.

Pros

  • It creates a post for each language and connects them as translations of the same article. This brings some advantages; in particular, each post has a unique link and it isn’t necessary to indicate the language in the URL (although it is possible).
  • It automatically creates an RSS feed for each language.
  • In addition to the traditional widget, the language switcher can be added as a menu entry, which makes it easier to include in the site.
  • Widgets are not actually translated, but we have the option, for each of them, to show it only in a certain language. The same goes for menus: all menu areas are multiplied by the number of languages, after which we just need to create a menu for each.
  • Various functions and filters are available for use in the theme, if necessary (the documentation is quite good).

Cons

  • The workflow of writing and translating can be a tedious chore. Polylang offers the option, among other things, to synchronize tags and categories between translations, but they must be translated first. This means that, after writing the original post, we have to translate tags and categories (if they are new) before creating the translations. Moreover, the media pages (pictures) must also be translated before they are available to insert into the translations.
  • It doesn’t translate the slugs of custom post types. For example, the address of the English version of ‘zecipriano.com/trabalho’ is ‘zecipriano.com/en/trabalho’, with no possibility of translating it to ‘zecipriano.com/work’.

Conclusion

WordPress was never intended for multilingual websites, so no solution will be perfect, but, overall, I’m pleased with Polylang, especially given that it’s a free plugin. Everything works as expected and the disadvantages turn out to be of little relevance.

Finally, note that in order to have a complete translation it’s also necessary to create, if it doesn’t exist yet, a translation of the theme, using the gettext functions [__("string")] and then translating the strings, for instance, with the CodeStyling Localization plugin.

A better development environment with Vagrant

Recently, I started using Vagrant for my local development environment.

XAMPP has fulfilled its role so far, but I’ve reached a point where I need a virtual machine as similar as possible to the production environment.

This is where Vagrant comes in, which, in a very simple (and perhaps a little misleading) way, can be described as a virtual machine manager.

It allows us, with a single file (called a Vagrantfile) at the root of our project, to describe the machine we want to use and the software to install on it. This file indicates the image that will be the basis for our virtual machine, the settings, and any provisioning required.

The base image (called a base box) can be one of the many available on the net, although the main source is the Vagrant Cloud. Just enter the URL of the box in the Vagrantfile and, if it is not already installed, it is downloaded automatically. Of course, instead of using an existing base box, we can create one from scratch, with exactly what we need.
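A minimal sketch of what such a Vagrantfile might look like; the box name, ports and provisioned packages are just examples:

    # Vagrantfile
    Vagrant.configure("2") do |config|
      # Base box, downloaded automatically if not already present
      config.vm.box = "ubuntu/trusty64"

      # Make the guest's web server reachable at http://localhost:8080
      config.vm.network "forwarded_port", guest: 80, host: 8080

      # Share the project folder with the virtual machine
      config.vm.synced_folder ".", "/var/www"

      # Install the software the project needs
      config.vm.provision "shell", inline: <<-SHELL
        apt-get update
        apt-get install -y apache2 php5 php5-mysql
      SHELL
    end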

After this initial setup, each time we need the machine we just run the ‘vagrant up’ command. Simple. Another major advantage is that the Vagrantfile can be shared between several people (via Git, for example) so that everyone works with the same virtual machine.

The official documentation is quite good and should be sufficient for most use cases.

It’s a tool with a bit of an initial learning curve, especially if we try to create our own base box (this tutorial helps), but it pays off once the rough edges have been smoothed out.