Canvas Hit Region Detection

Posted by Kristian Joos on October 3, 2021

For those of you who came here looking for the next entry in the VS Code productivity series, fear not: the series will continue, and the next entry will be available next week.

One of the projects I am currently working on is a platform for learning and playing chess. While I have many features planned for the platform, including personal match history (with a move-by-move summary), the ability to replay matches from a given game state, non-local multiplayer, “AI” opponents, and more, the first thing that needs to work is the ability to actually play a game of chess. This post details a challenge I encountered while developing the chess game itself, and how I engineered a solution to it.

For the chess game, I am using the Canvas API, specifically the CanvasRenderingContext2D interface. If you are unfamiliar with these, I suggest reading up on them briefly before continuing, so that you can better follow the problem and solution below.

My chess game uses a canvas element to render the gameboard, which is ultimately an 8x8 grid. Once the board has been rendered, and the 32 chess pieces drawn on top of it, the next step is to let the player interact with the pieces so that they can input moves and progress the game. My intended control scheme is mouse-driven: click the piece you wish to move, then click the square you wish to move it to. As soon as I began to implement this, I realized the problem: I could only add an EventListener for “click” events to the entire canvas element, not to individual squares.
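To make the setup concrete, here is a minimal sketch of how rendering an 8x8 board onto a canvas might look. The board size, colors, and function names here are my own illustrative choices, not the platform's actual code.

```javascript
const BOARD_SIZE = 480;             // canvas width/height in pixels (assumption)
const SQUARE_SIZE = BOARD_SIZE / 8;

// Which color a square should be, given zero-based column and row
// counted from the top-left. (0, 0) being light matches a standard
// board orientation with a8 in the top-left corner.
function squareColor(col, row) {
  return (col + row) % 2 === 0 ? "light" : "dark";
}

// Paint the 8x8 grid onto a 2D rendering context.
function drawBoard(ctx) {
  for (let row = 0; row < 8; row++) {
    for (let col = 0; col < 8; col++) {
      ctx.fillStyle =
        squareColor(col, row) === "light" ? "#f0d9b5" : "#b58863";
      ctx.fillRect(col * SQUARE_SIZE, row * SQUARE_SIZE, SQUARE_SIZE, SQUARE_SIZE);
    }
  }
}
```

The piece sprites would then be drawn on top of this grid, but the key point is that the whole board is a single element: the squares exist only as painted pixels, not as DOM nodes.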

I began brainstorming and researching potential solutions. The first one I came across was the addHitRegion API, which would accomplish my goal of splitting the canvas into an 8x8 grid and handling the “click” events accordingly. Problem solved? No! addHitRegion is still an experimental technology, and as such is not supported by default in most browsers; it was only available behind flags in Chrome and Firefox. I did not want to require my players to enable experimental features in their browser settings.

My next potential solution was to rethink my control scheme, and I considered a few options. The first was to highlight the current square with a border and let the arrow keys move the highlight around. The player would select the piece they wish to move with another key (likely space or enter), a new square highlight would render, and the player would navigate it to the desired destination and confirm with space or enter again. The other control scheme I considered was input fields. A chess board’s squares are referred to by a letter (a-h)/number (1-8) combination, where the letter denotes the column and the number denotes the row, so there would be one input for the piece to move and one for the desired destination. While both of these control schemes would be viable, they did not offer the ease of use I wished to offer my users, nor would they be compatible with timed or speed chess modes of gameplay, which are common ways to play the game and features I wish to implement.
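For the input-field scheme, the letter/number notation maps naturally onto grid indices. A hypothetical helper (the name and return shape are my own, for illustration) might look like this:

```javascript
// Convert algebraic notation like "e4" into zero-based grid indices,
// or null if the string is not a valid square.
function parseSquare(notation) {
  const match = /^([a-h])([1-8])$/.exec(notation.toLowerCase());
  if (!match) return null;
  const col = match[1].charCodeAt(0) - "a".charCodeAt(0); // a-h -> 0-7
  const row = Number(match[2]) - 1;                        // 1-8 -> 0-7
  return { col, row };
}
```

A move would then be two such squares, one from each input field, validated before being applied to the game state.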

While messing around with the actual “click” event in the debugger, I noticed that the event had some x and y properties, which upon further testing revealed themselves to be the pixel coordinates of the click! Some additional reading of the documentation led me to the clientX and clientY properties, which better serve my needs: they are relative to the user’s current viewport, rather than the absolute X and Y of the page. Thus if the user were to click the leftmost pixel of the viewport, clientX would always be 0, regardless of whether the page was scrolled horizontally. By taking the clientX and clientY of the “click” event and subtracting the x and y position of the bounding rectangle, in this case the canvas position (found via getBoundingClientRect()), I can determine the coordinates within the gameboard that the player is clicking. Knowing the dimensions of the gameboard (the canvas element), I can easily determine which square within the 8x8 grid the player is attempting to interact with, and thus handle the event accordingly by moving the desired piece to the intended square, provided the move is valid.
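The arithmetic described above can be sketched as follows. The handler name and the way the canvas is passed in are assumptions for illustration, not the platform's actual code.

```javascript
const SQUARES_PER_SIDE = 8;

// Map a point inside the board (in pixels, relative to the canvas's
// top-left corner) to its zero-based column and row.
function pointToSquare(x, y, boardSize) {
  const squareSize = boardSize / SQUARES_PER_SIDE;
  return {
    col: Math.floor(x / squareSize),
    row: Math.floor(y / squareSize),
  };
}

// Listen on the whole canvas, then translate the viewport-relative
// clientX/clientY into board-relative coordinates by subtracting the
// canvas's bounding rectangle.
function attachBoardClicks(canvas, onSquareClick) {
  canvas.addEventListener("click", (event) => {
    const rect = canvas.getBoundingClientRect();
    const x = event.clientX - rect.left;
    const y = event.clientY - rect.top;
    const { col, row } = pointToSquare(x, y, rect.width);
    onSquareClick(col, row);
  });
}
```

With this in place, the game logic only ever sees column/row pairs, and the single canvas-wide listener effectively behaves like 64 per-square listeners.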

This was a fun challenge to encounter, and I enjoyed the process of engineering a valid solution. Not only was I able to learn more about the Canvas API and mouse events, but I also wound up designing two alternate control schemes. While I won’t use those schemes as the default for the application, I plan to implement them as alternate control schemes for accessibility. They may not be ideal for my intended product, but they could allow players who cannot interact via a mouse, for either hardware or personal reasons, a way to use the product. I look forward to the additional engineering challenges that my chess platform may present to me along the way.