Rule of Thumb – Linking Order

Those newer to programming in C++ often don't realize that compilers like GCC or Clang require a specific link order for the libraries in use, or they simply don't know which order to pick. Working on SFML and helping out in its community, I've had the pleasure of helping people fix their linker errors many times, and every now and then I'd explain a short rule of thumb one can keep in mind when specifying the libraries to be linked. As such, I wanted to share it here with you too.

Continue Reading “Rule of Thumb – Linking Order”

How to use MinGW-w64 with CLion

Update 29.09.2014: As pointed out by Anastasia Kazakova in the comments, CLion’s EAP has received an update which added native support for MinGW-w64!

In the past few days, I've seen CLion mentioned in multiple places, including the SFML forum. CLion is a new IDE developed by JetBrains for CMake-based projects; it supports multiple languages and can be further extended with plugins. Some might recognize the interface, since it's very similar to JetBrains' popular Java IDE IntelliJ IDEA.

CLion

Personally I haven't really looked at the IDE in depth, but since JetBrains is the creator of the allegedly awesome ReSharper tool for Visual Studio – which I've never used – the IDE itself should provide some pretty decent refactoring capabilities. The last time I checked out Qt Creator, I was a bit disappointed by its sluggish CMake integration; this at least seems to work a lot better in CLion. Then again, maybe Qt Creator has upped its game since then as well.

This post, however, isn't really meant to introduce or promote CLion – many of you who somehow found this post probably already know what it is. Instead, it's a short tutorial on how to use a compiler of the MinGW-w64 family with CLion.

Tricking CLion

For whatever reason, the developers of CLion apparently didn't get the memo that the original MinGW has been more or less abandoned and most of its user base has moved on to the MinGW-w64 project, which was originally created to develop a 64-bit compiler but now supports both architectures. As such, CLion currently only officially supports the vanilla MinGW. Having used MinGW for many years, I found it rather odd that CLion would only recognize the original MinGW as a compiler, since the differences between using a MinGW and a MinGW-w64 version are practically nonexistent. When I ran ProcMon and tracked which files CLion accessed, I noticed that it checks for the existence of include/_mingw.h. By providing such a file in your MinGW-w64 directory structure, you can trick CLion into accepting it as a MinGW compiler.

How To

Here are the few steps to make it work:

  1. Get your MinGW-w64 compiler installed somewhere.
  2. Create a file with the path [MinGW-w64 dir]/include/_mingw.h and the content shown below.
  3. Point CLion to your [MinGW-w64 dir] and watch how CLion recognizes your MinGW setup.

_mingw.h

#ifndef __MINGW_H
#define __MINGW_H

#define __MINGW32_VERSION           3.20
#define __MINGW32_MAJOR_VERSION     3
#define __MINGW32_MINOR_VERSION     20
#define __MINGW32_PATCHLEVEL        0

#endif /* __MINGW_H */

Disclaimer: CLion is still in an Early Access Program and may change any day. This trick worked for me, but there’s still a chance that it might not work for you.

To end this write-up, here's the introduction video from JetBrains on CLion – it was posted on their blog.

Building = Preprocessing + Compiling + Linking

After answering the same questions over and over and over again, especially about compiling and linking, I thought it might be time to write down a few things that I and maybe others could point to, instead of repeating ourselves in the future. Given the wideness of the topic, I'll be writing things down in multiple parts. In this first post I'll explain a few basic terms and their practical applications, the next one will go into more detail on how linking against external libraries works, and in the third and last part I'll write down answers to some common issues. I hope this will turn out to be a timesaver for all parties. – As a disclaimer I should add that I'm a human being who makes mistakes and can have wrong models in mind, so if I get something wrong, feel free to point it out.

Part I - Building, Compiling, Linking

Part I – Terms and Explanations

Although this should essentially be covered in any basic programming book, even the bad ones, here we go with a few “definitions”:

Application

Looking at it in a very abstract way, an application is no different from anything else on your hard disk, because it's just ones and zeroes; the important part, however, is how these numbers get interpreted. Your CPU (the heart & brain of your PC) doesn't understand the ones and zeroes of a music file, but if you give it the data of an application, it can translate them into actions that will (hopefully) do what the programmer intended.

Building

To get such an application, your C++ code has to be turned into ones and zeroes, also known as machine code. The full process of doing so is called building and is as such more of an umbrella term, because the process is essentially a composition of three sub-processes: preprocessing, compiling and linking.

Header File (*.hpp / *.h)

If you've done your job right, i.e. followed the correct way of writing code, then you should have all the declarations of functions, classes, structs, enumerations, templates, etc. in header files. With the help of a header file, the compiler knows the declarations – and thus the types, sizes and function signatures – it needs for the machine code that is about to be generated.
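
To make this a bit more concrete, here's a minimal sketch of such a header (the Enemy class and its members are just made up for illustration):

// Enemy.hpp -- declarations only, no implementation details
#ifndef ENEMY_HPP
#define ENEMY_HPP

#include <string>

class Enemy
{
public:
    Enemy(const std::string& name, int health);
    void damage(int amount); // declared here, defined in Enemy.cpp
    bool isAlive() const;

private:
    std::string m_name;
    int m_health;
};

#endif // ENEMY_HPP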

Source File (*.cpp)

The source file holds most of the implementation details, also known as definitions. It's where the behavior of the application itself is created, meaning how the declared classes and functions interact with each other.
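
Sticking with the made-up Enemy example from above, the matching source file could look like this:

// Enemy.cpp -- the definitions belonging to the declarations in Enemy.hpp
#include "Enemy.hpp"

Enemy::Enemy(const std::string& name, int health) :
m_name(name),
m_health(health)
{
}

void Enemy::damage(int amount)
{
    m_health -= amount;
}

bool Enemy::isAlive() const
{
    return m_health > 0;
}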

Inline File (*.inl)

This is not always needed and essentially optional, but it's a common and good practice to put definitions that are required to be in the header file into a separate file and include it at the needed position in the header. The most common use of inline files is for template definitions, which need to be visible in the header because they get instantiated at compile time and can influence the memory layout and footprint.
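
A rough sketch of how such a split can look (again with made-up names): the header only declares the template and pulls in the inline file, which holds the definition.

// Clamp.hpp
#ifndef CLAMP_HPP
#define CLAMP_HPP

template <typename T>
T clamp(T value, T low, T high); // declaration only

#include "Clamp.inl" // the definition still ends up in the header for every user

#endif // CLAMP_HPP

// Clamp.inl
template <typename T>
T clamp(T value, T low, T high)
{
    if(value < low)
        return low;
    if(value > high)
        return high;
    return value;
}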

Preprocessing

Before your code gets translated into anything, the preprocessor kicks in. All the statements that start with a number sign are preprocessor directives and get resolved in this first step of building an application: for example, every #include gets replaced with the actual content of the included header file, for any #define X Z every X gets replaced with Z, and any kind of macro gets expanded.
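
Sticking with the made-up example files from above, this roughly means:

// Before preprocessing:
#include "Enemy.hpp"   // gets replaced by the full content of Enemy.hpp
#define MAX_ENEMIES 10 // from here on, every MAX_ENEMIES becomes 10

Enemy enemies[MAX_ENEMIES];

// What the compiler actually gets to see afterwards (roughly):
//   ...the pasted content of Enemy.hpp...
//   Enemy enemies[10];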

Compiling

Now that each source file holds all required declarations, the compilation can begin. Compiling is one of the terms that confuses a lot of people, since it's often used synonymously with building, which in turn isn't too surprising given that compiling is the most crucial part: it actually translates all the source code into machine code. But you don't end up with the application itself; instead you end up with an object file for each source file.

Object File (*.o / *.obj)

Object files contain the translated source code as machine code. While the CPU would essentially understand the commands in the binary data, it would not know where exactly to start and would be missing other referenced pieces of code. Thus these object files are only the building blocks used by the linker to generate the final executable.

Linking

As pointed out above, the linker combines the object files generated by the compiler. It's called linking because it obviously doesn't just append or prepend the files, but actually makes "smart" links between parts of the object files – for details you might want to refer to a linker manual or other sources. But the linker doesn't just link object files, it also links in libraries, and it's here where the difference between static and dynamic libraries becomes important. I'll go into more detail in the next part; for now I'll just say that static libraries are essentially archives of object files and thus can be linked in directly, while for dynamic libraries only the interface gets linked and the "connection" between the application and the external code is made at runtime.
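
To make that a bit more tangible, here's a small, made-up example with two source files. Each gets compiled into its own object file, and it's only the linker that connects the call in the one to the definition in the other:

// greet.hpp
void greet();

// greet.cpp -- compiled into greet.o / greet.obj
#include "greet.hpp"
#include <iostream>

void greet()
{
    std::cout << "Hello from another object file!" << std::endl;
}

// main.cpp -- compiled into main.o / main.obj
#include "greet.hpp"

int main()
{
    greet(); // the compiler only sees the declaration; the linker resolves the actual call
    return 0;
}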

Integrated Development Environments

This essentially doesn't have anything to do with building applications, but it's exactly that point that often gets confused. Integrated Development Environments, short IDEs, are simply a set of tools created to work nicely (integrated) together and assist the programmer in writing code. A compiler and a linker can be considered part of this tool set, but it's not the IDE that builds anything. Some of the most common IDEs are Visual Studio, Code::Blocks, Qt Creator and Xcode. One exception remains, though: the Visual Studio compiler doesn't really have a name of its own. Sometimes it's referred to as MSVC (Microsoft Visual C++), which in turn could also mean the IDE itself. What's certain is that one should not say "I built this game with Code::Blocks", but instead "I built this game with MinGW" or "with GCC".

Final Thoughts

Having reached the end of the first part, I hope this basic information will be of some help to beginners, and depending on the response and future experience I might add one or two paragraphs. The second part already has some words written for it, so stay tuned!

Using sf::View

I've been using SFML for more than a year now, but I never really understood how sf::View works – until now. So I feel like sharing this enlightenment, and I'll also create a tutorial in the wiki section on GitHub.

What can a sf::View do?

This question is not that complicated to answer, and I would've been able to do so a few weeks back even without really understanding how sf::View works. The answer is a description, so here you go:

An sf::View is basically a 2D camera, i.e. you can move freely in two dimensions, you can rotate the whole scene clockwise or counterclockwise, and you can zoom in and out, but there's no tilting or panning. Furthermore, zooming really means enlarging the existing picture rather than closing in on something. In short, there's no 3D interaction.

Examples

initial

So if you now want to move a sprite around without altering its coordinates but instead moving the 2D camera, you could do this easily by calling:

view.move(360.f, 360.f);

move

Next you maybe want to get a closer look at something, so you can use:

view.zoom(0.1f);

zoom

Or you want to rotate everything:

view.rotate(20.f);

rotate

Now you get the idea what the camera can do and you’d probably be even able to program something with it, but do you understand how it works? Do you know what the center of the sf::View is?

So let’s get a bit more technical.

How does sf::View work?

So lets see what the documentation of SFML says about this:

A view is composed of a source rectangle, which defines what part of the 2D scene is shown, and a target viewport, which defines where the contents of the source rectangle will be displayed on the render target (window or texture).

The viewport allows to map the scene to a custom part of the render target, and can be used for split-screen or for displaying a minimap, for example. If the source rectangle has not the same size as the viewport, its contents will be stretched to fit in.

If you’ve understood everything then congrats! I didn’t (at first) and although it’s a good, short and precise description it’s not very intuitive.

From the text above we can extract that there are two different rectangles defining the sf::View: a source rectangle and a viewport. What we also have, although not 'physically', is the render coordinate system, i.e. the coordinates you use to draw sprites etc.

First I'll explain how the source rectangle and the render coordinate system work together, then talk about the size and the constructor, then show you how to use the viewport to create different layouts like a split-screen or a mini-map, as suggested in the description, and at the end get away from the direct manipulation of the sf::View and look at the convertCoords(…) function of sf::RenderTarget.

At the bottom of the post you’ll find a ZIP file which holds a fully working example, demonstrating everything you’ll learn in this post.

The source rectangle

Since I don't want to talk about the viewport yet, I'll let it keep its default value, which means that it covers the whole window, i.e. the whole scene gets rendered 1:1 onto the window.

In the example above I've already moved the view to the point where I'm rendering the separate sprite of Link. Link's position in pixels is (1056, 640), but to get him – or rather his top left corner – centered, I can't just use what might seem intuitive:

view.setCenter(1056, 640);

(Of course you can set the view center to (0, 0) and then move it to (1056, 640), but we're trying to understand what the center of the view is.)

Since I'm assuming that you already have some experience with SFML, I'll also assume that you know SFML uses a Cartesian coordinate system with a flipped Y axis. So the point (0, 0) can normally be found in the top left corner, and the maximum x and y values can be found in the bottom right corner. If you're working with images you'll quickly get comfortable with this system.

But now comes the center of the view, which is in fact defined from the middle point of the display area. The view uses a Cartesian coordinate system too, but this time the X axis gets flipped. In the end we have the point (0, 0) (also known as the origin) in the middle of the view, the maximum x and y values in the top left corner and the minimal, negative x and y values in the bottom right corner. What you can actually set with setCenter is where the origin of the rendering coordinate system is put in the coordinate system of the view. I hope the following images will clarify the situation a bit.

The black square should represent the window, and since we left the viewport at its default value, the origin of the sf::View coordinate system will always be in the middle of the window. Here are the two separate coordinate systems:

coord-render

coord-view

If we combine them we get for example something like:

coord-comb

Then we have our two operations rotate() and zoom(). The last picture combines the two transformations:

coord-comb-rot

coord-comb-zoom

coord-comb-zoom-rot

I don't think I can explain this much better than with those pictures.

Note that in the images above I've moved the rendering coordinate system around to get a few different variations. I guess most of the time it's better to use the move() function, since you won't need to deal with where to place the origin point. On the other hand, you really need to understand how sf::View works to use rotate() and zoom(), because it's not obvious that those transformations happen around the sf::View origin, i.e. the middle of the view.
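
Here's a minimal sketch of what I mean, reusing the values from the examples above (and assuming window is your sf::RenderWindow):

sf::View view = window.getView();
view.move(360.f, 360.f); // shift the camera; the center moves along with it
view.zoom(0.1f);         // zooms in around the (moved) center, not around (0, 0)
view.rotate(20.f);       // rotates around that same center
window.setView(view);    // don't forget to apply the view before drawing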

Keep in mind that the introduced concept doesn't just hold for a window; it works with any render target, including sf::RenderTexture.

The size and the constructor

Before we take a closer look at the viewport we need to understand what the size of the view is and which parameters the constructor takes.

So far we've used the same size for the view as for the render target, which gives us a 1:1 projection. But what if we divided both sides of the view's size by two? From the observer's perspective such a change equals using view.zoom(0.5f), but from the programmer's perspective it's something completely different. As we'll learn in the next paragraph, we can use the viewport to map what gets rendered to a certain area of the window. Now if we applied the scene with the same size as before, everything would get shrunk and eventually stretched if the aspect ratio isn't the same anymore. This can create some wanted effects, but mostly it would defeat the purpose. By setting a specific size, we're telling the sf::View how big the view should be drawn while not making any visible transformations on the rendering part.

With that in mind it’s now easy to understand how to use the constructor.

sf::View view(sf::Vector2f(origin.x, origin.y), sf::Vector2f(size.x, size.y));

Where origin is a 2D vector marking the origin in the rendering coordinate system (the same value you'd pass to setCenter) and size is the sf::View size as discussed above.
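
To make the difference a bit more concrete, here's a small sketch assuming an 800x600 window: constructing a view with half the size looks, to the observer, exactly like taking a full-sized view and zooming in by a factor of two.

sf::View half(sf::Vector2f(400.f, 300.f), sf::Vector2f(400.f, 300.f));   // half-sized source rectangle
sf::View zoomed(sf::Vector2f(400.f, 300.f), sf::Vector2f(800.f, 600.f)); // full-sized source rectangle
zoomed.zoom(0.5f); // for the observer this now shows the same cutout as 'half'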

The viewport rectangle

Now that we've seen most of the things sf::View can do, we want to generalize this even further. So far we've assumed we were using a window and rendering 1:1 onto its surface, but what if we wanted to display our stuff only in the lower right corner, or use only the left half? That's where the viewport comes in.

The default value is sf::FloatRect(0, 0, 1, 1), which means the view covers 100% of the render target. So we get another rectangle which holds values as fractions of the render target's size: the first two values describe the position of the upper left corner and the last two values the width and height of the viewport.

If you want to split the screen, you need two different views, one for the left and one for the right side. Before you draw to one side you have to set the corresponding view on the render target; to draw something to the other side, just set the second view and you're good to go. This could look like this:

sf::View viewLeft(sf::FloatRect(0, 0, window.getSize().x/2, window.getSize().y));
viewLeft.setViewport(sf::FloatRect(0, 0, 0.5, 1));
sf::View viewRight(sf::FloatRect(0, 0, window.getSize().x/2, window.getSize().y));
viewRight.setViewport(sf::FloatRect(0.5, 0, 0.5, 1));
// ...
window.setView(viewLeft);
window.draw(leftSprite);
window.setView(viewRight);
window.draw(rightSprite);

With all the knowledge provided above you should now have a good understanding of why we divide the window width by two or why viewRight is defined with 0.5 as first argument.

split

Another example I promised to show is how to create a mini-map, i.e. a scaled-down overview of the whole map. In practice it's often better to construct a dedicated mini-map rather than just scaling down the original one, since the quality can get fairly poor. But let's first see how it really looks.

If you're working with more dynamic data, you'd probably need to render everything to an sf::RenderTexture first and then draw the resulting texture twice. In this example we ignore that and just assume we've got a sprite with the Zelda map texture, so we can reduce the code to a few lines.

sf::View standard = window.getView();
unsigned int size = 100;
sf::View minimap(sf::FloatRect(standard.getCenter().x, standard.getCenter().y,
                               size, window.getSize().y*size/window.getSize().x));
minimap.setViewport(sf::FloatRect(1.f-(1.f*minimap.getSize().x)/window.getSize().x-0.02f,
                                  1.f-(1.f*minimap.getSize().y)/window.getSize().y-0.02f,
                                  (1.f*minimap.getSize().x)/window.getSize().x,
                                  (1.f*minimap.getSize().y)/window.getSize().y));
minimap.zoom(4.f);
// ...
window.setView(standard);
window.draw(map);
window.setView(minimap);
window.draw(map);

minimap

The convertCoords(…) function

As we've seen in the paragraph about the source rectangle, it's not very intuitive to work with those two coordinate systems. It's also pretty hard to determine by hand at which position, for instance, your mouse cursor is located relative to the view underneath. But why would you want to do this by hand, when SFML already has a built-in function that does the heavy lifting for you, namely convertCoords(…)?

There are two ways to use this function:

  1. Determine the position of a point P on the render target relative to the current view of the render target.
  2. Determine the position of a point P on the render target relative to a specified view.

This function can be very useful, e.g. you've moved your view around and now the user clicks on a point on the window. Just call convertCoords(…) and you'll instantly know where on the moved view they clicked.
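
Here's a minimal sketch of both variants, assuming the SFML 2.0 API where convertCoords(…) takes a pixel position and optionally a view:

sf::Vector2i pixel = sf::Mouse::getPosition(window);

// 1. relative to the view currently set on the render target
sf::Vector2f world = window.convertCoords(pixel);

// 2. relative to an explicitly specified view, e.g. the minimap from above
sf::Vector2f onMinimap = window.convertCoords(pixel, minimap);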

Complete demonstration

I've created a small application which packs up every introduced concept. The code isn't the prettiest, but it contains a lot of more or less useful comments. I also provide a package that contains just the images presented in this blog post, and since some people may want both, I've created one that contains everything.

If you have any questions/suggestions or found any mistakes/bugs/errors just let me know in the comment section!

Tutorial: How to change your cursor?

cursor

I've just published my first tutorial with SFML – okay, it's actually not entirely my tutorial, since I've partially rewritten one from the old section, but I expanded it with, in my opinion, a better solution.
Although you can find the tutorial on the wiki site of SFML, I'm posting it here again.


How to change your cursor?

The cursor is something every computer user is familiar with and, in fact, constantly stares at, yet many people don't even realize anymore that it's there and changes its shape every so often. Besides functionality like pointing and clicking, the cursor can show many different states and indicate possible actions. For example, you'll get a selection cursor when hovering over text, or a hand-shaped cursor could indicate a link, etc.

Since SFML is neither a framework nor a GUI system, providing a native function for changing the mouse cursor doesn't fit its purpose. That's where this tutorial comes in. If you're making an application or a game, you might want to be able to display a different cursor. There are two ways to change your cursor:

  1. You can hide the default cursor and draw a sprite where the cursor should be
  2. You can ask your OS to do it for you. (Windows & Linux)

This tutorial will cover both methods.

What do you need?

You’ll need SFML, an editor and a compiler (obviously these links are only suggestions).
Since I won’t go into details regarding C++ or SFML, the tutorial requires you to have some basic knowledge on both topics.

Hide and Draw

This task is fairly simple; it consists of three required steps and one optional one:

  1. Use sf::RenderWindow::setMouseCursorVisible(bool) to hide the cursor.
  2. Set the position of the sprite to the position of the mouse.
  3. (optional) Adjust the view to get the correct render position.
  4. Draw the sprite to the screen.

Since it's so simple, there's not much more to talk about; the following example shows a possible implementation:

#include <cstdlib> // EXIT_SUCCESS
#include <SFML/Graphics.hpp>

int main()
{
    sf::RenderWindow window(sf::VideoMode(800, 600), "Hidden Cursor");
    window.setMouseCursorVisible(false); // Hide cursor

    sf::View fixed = window.getView(); // Create a fixed view

    // Load image and create sprite
    sf::Texture texture;
    texture.loadFromFile("cursor.png");
    sf::Sprite sprite(texture);

    while(window.isOpen())
    {
        sf::Event event;
        while(window.pollEvent(event))
        {
            if(event.type == sf::Event::Closed)
            {
                window.close();
            }
        }

        sprite.setPosition(static_cast<sf::Vector2f>(sf::Mouse::getPosition(window))); // Set position

        window.clear();
        window.setView(fixed);
        window.draw(sprite);
        window.display();
    }

    return EXIT_SUCCESS;
}

Ask your Operating System (OS)

What are the advantages of using this method over the other?

  • The OS will not duplicate the cursor in memory, because it is a shared resource.
  • The OS will use a cursor that the user is used to. For example, on Windows 7 the user is not used to seeing an hourglass, but is familiar with some sort of animated circle.

Class Prototype

Since we have to store the cursor differently on Linux than on Windows, we introduce a preprocessor switch for the include files and the private class members.

#ifndef STANDARDCURSOR_HPP
#define STANDARDCURSOR_HPP

#include <SFML/System.hpp>
#include <SFML/Window.hpp>

#ifdef SFML_SYSTEM_WINDOWS
    #include <windows.h>
#elif defined(SFML_SYSTEM_LINUX)
    #include <X11/cursorfont.h>
    #include <X11/Xlib.h>
#else
    #error This OS is not yet supported by the cursor library.
#endif

namespace sf
{
    class StandardCursor
    {
    private:
        #ifdef SFML_SYSTEM_WINDOWS

        HCURSOR Cursor; /* Type of the Cursor with Windows */

        #else

        XID Cursor;
        Display* display;

        #endif
    public:
        enum TYPE{ WAIT, TEXT, NORMAL, HAND /*,...*/ };
        StandardCursor(const TYPE t);
        void set(const sf::WindowHandle& aWindowHandle) const;
        ~StandardCursor();
    };
}

#endif // STANDARDCURSOR_HPP

Class Implementation

Instead of 'breaking up' the scope with preprocessor statements inside each function, we define every function twice, once per platform, wrapped in a single preprocessor switch.

#include "StandardCursor.hpp"

#ifdef SFML_SYSTEM_WINDOWS

sf::StandardCursor::StandardCursor(const sf::StandardCursor::TYPE t)
{
    switch(t)
    {
        case sf::StandardCursor::WAIT :
            Cursor = LoadCursor(NULL, IDC_WAIT);
        break;
        case sf::StandardCursor::HAND :
            Cursor = LoadCursor(NULL, IDC_HAND);
        break;
        case sf::StandardCursor::NORMAL :
            Cursor = LoadCursor(NULL, IDC_ARROW);
        break;
        case sf::StandardCursor::TEXT :
            Cursor = LoadCursor(NULL, IDC_IBEAM);
        break;
        //For more cursor options on Windows go here: http://msdn.microsoft.com/en-us/library/ms648391%28v=vs.85%29.aspx
    }
}

void sf::StandardCursor::set(const sf::WindowHandle& aWindowHandle) const
{
    SetClassLongPtr(aWindowHandle, GCLP_HCURSOR, reinterpret_cast<LONG_PTR>(Cursor));
}

sf::StandardCursor::~StandardCursor()
{
    // Nothing to do in the destructor: no memory has been allocated (shared resource principle)
}

#elif defined(SFML_SYSTEM_LINUX)

sf::StandardCursor::StandardCursor(const sf::StandardCursor::TYPE t)
{
    display = XOpenDisplay(NULL);

    switch(t)
    {
        case sf::StandardCursor::WAIT:
            Cursor = XCreateFontCursor(display, XC_watch);
        break;
        case sf::StandardCursor::HAND:
            Cursor = XCreateFontCursor(display, XC_hand1);
        break;
        case sf::StandardCursor::NORMAL:
            Cursor = XCreateFontCursor(display, XC_left_ptr);
        break;
        case sf::StandardCursor::TEXT:
            Cursor = XCreateFontCursor(display, XC_xterm);
        break;
        // For more cursor options on Linux go here: http://www.tronche.com/gui/x/xlib/appendix/b/
    }
}

void sf::StandardCursor::set(const sf::WindowHandle& aWindowHandle) const
{
    XDefineCursor(display, aWindowHandle, Cursor);
    XFlush(display);
}

sf::StandardCursor::~StandardCursor()
{
    XFreeCursor(display, Cursor);
    XCloseDisplay(display); // close the connection opened with XOpenDisplay
    display = NULL;
}

#else
    #error This OS is not yet supported by the cursor library.
#endif

Cursor Demonstration

This section presents a fully functional demonstration of both cursor changing possibilities.

To get a handle to the window we use SFML's sf::Window::getSystemHandle() function and then set the cursor with the OS-specific implementation.

#include <cstdlib> // EXIT_SUCCESS, EXIT_FAILURE
#include <iostream>
#include <SFML/Graphics.hpp>
#include "StandardCursor.hpp"

int main()
{
    int choice = 0;
    while(choice != 1 && choice != 2)
    {
        std::cout << "\t1. Hide the cursor and draw your own." << std::endl;
        std::cout << "\t2. Let the OS handle the cursor." << std::endl;
        std::cout << "Choose your cursor behaviour: ";
        std::cin >> choice;
    }

    sf::RenderWindow window(sf::VideoMode(800, 600), "Cursor Demonstration");

    if(choice == 2)
    {
        sf::StandardCursor Cursor(sf::StandardCursor::HAND);
        Cursor.set(window.getSystemHandle());
    }
    else
        window.setMouseCursorVisible(false);

    sf::View fixed = window.getView();
    sf::Texture texture;
    if(!texture.loadFromFile("cursor.png"))
        return EXIT_FAILURE;
    sf::Sprite sprite(texture);

    while(window.isOpen())
    {
        sf::Event event;
        while(window.pollEvent(event))
            if(event.type == sf::Event::Closed)
                window.close();

        window.clear();

        if(choice == 1)
        {
            sprite.setPosition(static_cast<sf::Vector2f>(sf::Mouse::getPosition(window)));
            window.setView(fixed);
            window.draw(sprite);
        }

        window.display();
    }
    return EXIT_SUCCESS;
}

A *.zip file containing the code for the StandardCursor class, the demonstration, and a cursor image can be obtained through this link: cursor.zip (2.9 KB)