
To log or not to log: JavaScript console game

2024-05-19

soft

Logging data or messages to the console is probably the most common form of debugging. But did you know that besides the basic console.log() there is a whole bunch of other methods that can structure your data, simplify tracing errors, and even time execution? If not, this post is for you, so you can play the console game more effectively with these hacks.

Image by mabelizt on Freepik

The basics

As we stated before, you can use console.log() for almost anything: a message, data from an API call, showcasing the data flow within your application, or even a note to your future self about fixes or enhancements. Let's take a look at that.

classic console log
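A runnable sketch of those everyday cases (the messages and the user object are made up for illustration):

```javascript
// A plain message
console.log('App started');

// Data from an API call (hypothetical response object)
const user = { id: 1, name: 'Ada' };
console.log('Fetched user:', user);

// Tracing data flow through the app
function toUpper(name) {
  console.log('toUpper called with:', name);
  return name.toUpperCase();
}
console.log(toUpper(user.name)); // logs the trace, then "ADA"

// A note to your future self
console.log('TODO: cache this result once the API stabilizes');
```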

Nicer UI for warnings and errors

Although you can use the classic console.log() for warnings and errors, it's always nice to add some visual distinction when logging these. The exact UI depends on the browser, but errors usually get red markings and warnings yellow ones. Here is how it looks in Google Chrome:

logs for warning and error
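For example (the messages themselves are illustrative):

```javascript
// Plain log: neutral styling
console.log('Just an info message');

// Warning: usually rendered with a yellow icon and background
console.warn('Deprecated option used, falling back to defaults');

// Error: usually rendered in red, with a stack trace attached
console.error('Failed to load config:', new Error('file not found'));
```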

Orderly data with table

This one is extremely helpful when dealing with objects or nested arrays. It improves readability and helps you see what your matrix really looks like and where to debug. Let's take a look at both scenarios:

tables based on object and matrix accordingly
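A small sketch of both scenarios (the sample data is made up):

```javascript
// An array of objects: columns come from the object keys
const dogs = [
  { name: 'Rex', breed: 'husky', age: 3 },
  { name: 'Luna', breed: 'beagle', age: 5 },
];
console.table(dogs);

// A nested array (matrix): rows and columns are indices
const matrix = [
  [1, 2, 3],
  [4, 5, 6],
];
console.table(matrix);

// You can also limit the displayed columns
console.table(dogs, ['name', 'age']);
```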

Dealing with performance

Timing execution can be helpful when trying to find bottlenecks in your application's performance. And JavaScript has you covered here too with console.time(), console.timeEnd(), and console.timeLog(). Just start the timer before the function call, and mark the end after it.

logging execution time
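For instance, with a stand-in function to profile (`slowSum` is just for illustration):

```javascript
// slowSum stands in for whatever function you want to profile
function slowSum(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i;
  return total;
}

console.time('slowSum');     // start a timer under a label
const result = slowSum(1e7);
console.timeEnd('slowSum');  // logs e.g. "slowSum: 12.3ms" and stops the timer

console.log(result);
```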

You can also log intermediate timestamps throughout your application with console.timeLog():

logging execution time with timelog
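A sketch with two intermediate checkpoints (the "pipeline" stages are illustrative):

```javascript
console.time('pipeline');

// ...first stage (stand-in work)
const parsed = JSON.parse('{"items":[1,2,3]}');
console.timeLog('pipeline', 'after parsing');  // intermediate timestamp

// ...second stage
const doubled = parsed.items.map((n) => n * 2);
console.timeLog('pipeline', 'after mapping');  // another timestamp

console.timeEnd('pipeline');                   // final time, timer stopped
```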

Measuring calls’ count

When it comes to performance, you have to remember that not only the execution time is crucial but also how many times a function is called. Of course, there is a method to deal with that too:

counting executions with console.count
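For example (the `render` function and labels are made up):

```javascript
function render(component) {
  // Each call increments and logs a counter under the given label
  console.count(component);
}

render('Header'); // Header: 1
render('Header'); // Header: 2
render('Footer'); // Footer: 1
render('Header'); // Header: 3

console.countReset('Header'); // resets that label back to zero
```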

Summary

We discussed more advanced methods of using the JavaScript console beyond the basic `console.log()` and got familiar with several useful techniques for debugging and performance monitoring:

  • **Console Methods**: Apart from `console.log()`, methods like `console.warn()` and `console.error()` can visually differentiate warnings and errors with color-coded messages.
  • **Structured Data**: `console.table()` helps display objects or arrays in a readable tabular format, enhancing data visualization and debugging.
  • **Performance Timing**: Using `console.time()`, `console.timeEnd()`, and `console.timeLog()`, developers can measure the execution time of code blocks to identify performance bottlenecks.
  • **Execution Counts**: `console.count()` tracks the number of times a specific piece of code is executed, which is useful for performance analysis.
These techniques aim to enhance debugging efficiency and provide a clearer understanding of code performance.

Promises, promises: JavaScript's way to be your trustworthy friend

2024-03-15

soft

In the world of unreliable people and things, JS always has your back. JS will never ghost you, leaving unresolved issues. It will always address your problems with a fulfilling answer or a rejection for a good reason. It can make you a king or queen of the asynchronous world, where your subjects can race, settle, or all resolve. Today we are gonna dive into what promise syntax looks like, how promises resolved the callback hell issue, and when you will probably need them.

Introduction

Promises were proposed to become a part of the JS ecosystem in the States and Fates document. They were an answer to the popular problem of nested callbacks, known as callback hell or the pyramid of doom. How big was the problem, you ask? Well, pretty big:

callback hell example
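A sketch of what such a pyramid might look like (the step functions are invented for illustration):

```javascript
// Each step passes its result to the next via a callback,
// and error handling has to be repeated at every level.
function getUser(id, cb) { setTimeout(() => cb(null, { id, name: 'Ada' }), 10); }
function getOrders(user, cb) { setTimeout(() => cb(null, ['order-1']), 10); }
function getInvoice(order, cb) { setTimeout(() => cb(null, { order, total: 42 }), 10); }

getUser(1, (err, user) => {
  if (err) return console.error(err);
  getOrders(user, (err, orders) => {
    if (err) return console.error(err);
    getInvoice(orders[0], (err, invoice) => {
      if (err) return console.error(err);
      console.log('total:', invoice.total); // and the pyramid keeps growing...
    });
  });
});
```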

Try to unwrap this crazy burrito code! That's why promises were introduced with the release of ECMAScript 6 (ES6) in June 2015. In short: a promise is an object that represents the eventual completion (with a resulting value we couldn't predict when we created the promise) or failure (with a reason) of an asynchronous operation. So how does it work?

Three states of your best friend

By default, a promise can be in one of three states, and only one at a time. The initial state of a promise is pending. After that, there are two scenarios: either it is fulfilled with a value, or rejected for a reason. When it's fulfilled or rejected, the promise is said to be settled, which means: do whatever you wanna do with what it returned. I think this little chart can be helpful to understand and remember those states:

chart of stages of promises

Syntax

The basic syntax is pretty easy. We invoke the Promise constructor and give it a callback with two parameters: resolve and reject. In this scenario, we have to simulate an async action, so I used a 1 s timeout. We give strict instructions on how to proceed in the case of success (resolve with our greetings variable) and failure (reject with an error message).

basic syntax of promise
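A sketch of that setup (`greetings` and the `success` flag are illustrative):

```javascript
const greetings = 'Hello from the async world!';

const promise = new Promise((resolve, reject) => {
  // Simulate an async action with a 1 s timeout
  setTimeout(() => {
    const success = true; // flip to false to see the rejection path
    if (success) {
      resolve(greetings); // fulfilled with a value
    } else {
      reject(new Error('Something went wrong')); // rejected for a reason
    }
  }, 1000);
});
```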

According to the previous diagram, now we have to deal with the outcome by chaining our promise with the .then method if it was successful, or catching the error in case of failure. There is one other possibility: we can use the finally method for actions that are necessary whether it was successful or not. We can also chain those methods as many times as we want.

dealing with result of a promise
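For instance (the `greet` promise is a stand-in for any async operation):

```javascript
const greet = new Promise((resolve) => {
  setTimeout(() => resolve('Hello!'), 500);
});

greet
  .then((value) => {
    console.log(value);         // "Hello!"
    return value.toUpperCase(); // each .then can pass a new value down the chain
  })
  .then((shouted) => console.log(shouted)) // "HELLO!"
  .catch((reason) => console.error('Rejected because:', reason))
  .finally(() => console.log('Settled, cleaning up either way'));
```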

Concurrency

You are probably wondering: what if there is more than one of them? How can I ensure and predict their behavior? Well, there are four static methods to deal with this problem, shown in the diagram below:

behavior of promises depending on used method

What does it mean? With Promise.any() you are saying: if any one of them fulfills -> this is a success; all of them must fail for it to count as a failure.

The exact opposite is Promise.all(): if any one of them fails -> it's a failure; they all need to succeed to call it a win.

With Promise.race() we take whatever comes first: it settles as soon as the first of the promises settles, whether that is a win or a failure.

And finally we have the most patient method, Promise.allSettled(), which waits for every promise to settle (fulfill or reject) before calling it a day, and never rejects. Below, you will find an example of usage with Promise.all:

example of promise.all method usage
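A sketch of that usage (the three "fetch" promises are simulated with timers):

```javascript
const fetchName = new Promise((resolve) => setTimeout(() => resolve('Rex'), 300));
const fetchBreed = new Promise((resolve) => setTimeout(() => resolve('husky'), 100));
const fetchAge = Promise.resolve(3);

// Fulfills with an array of all the results (in input order) once every
// promise fulfills; rejects as soon as any one of them rejects.
Promise.all([fetchName, fetchBreed, fetchAge])
  .then(([name, breed, age]) => {
    console.log(`${name} is a ${age}-year-old ${breed}`);
  })
  .catch((err) => console.error('One of them failed:', err));
```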

Example usage

Promises are used every day by thousands of developers for:

  • HTTP requests handling
  • file handling
  • timeouts and delays
  • database operations
  • event handling
  • parallel execution

I want to show a basic example of an HTTP call to a free REST API named Dog API. First of all, we instantiate a new Promise object with a callback that gets two parameters, resolve and reject, exactly as in the syntax example. After that, we use the fetch function and give it an endpoint URL found in the API documentation as an argument. The function itself returns a Promise that resolves to the Response object representing the response to the request. We can then use the then() method on this Promise to handle the response asynchronously.

We make sure that the response status is ok. Why? Unfortunately, only network errors or other issues preventing the request from completing will cause the fetch Promise to be rejected. The response.ok property checks whether the status code of the response is between 200 and 299. If the response is not ok, we can narrow down the cause of not getting the proper data (like 404 not found or 500 server down) and raise an error. If you want to dive deeper into the HTTP errors topic, here is a fun way to learn some: http cats.

Now, the json() method of the Response interface reads the response body and returns a promise that resolves with the result of parsing the body text as JSON (yes, the json method does not return JSON, it returns an object). And now, finally, we can resolve the outer promise with that object, or reject it if an error (network, or the HTTP status error we raised) occurs.

example of usage promises while handling HTTP requests
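A sketch of that flow, wrapped in a factory so it doesn't fire a request until called (`makeDogPromise` is an illustrative name; the endpoint comes from the Dog API docs):

```javascript
// Mirrors the flow described above; assumes a global fetch (browsers, Node 18+)
const makeDogPromise = () =>
  new Promise((resolve, reject) => {
    fetch('https://dog.ceo/api/breeds/list/all')
      .then((response) => {
        // fetch only rejects on network failures, so check the HTTP status ourselves
        if (!response.ok) {
          throw new Error(`HTTP error, status: ${response.status}`);
        }
        return response.json(); // a promise resolving to a plain object
      })
      .then((data) => resolve(data))
      .catch((error) => reject(error)); // network error or the status error we raised
  });
```

Since fetch already returns a promise, the outer constructor is only there to mirror the earlier syntax example; consuming it looks like `makeDogPromise().then((data) => console.log(Object.keys(data.message))).catch(console.error)`.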

At the end, we just take the result of our dogPromise and log the keys of the message object, and in case of an error, catch and log it as well. I strongly suggest running every step of this independently, to fully understand how we are dealing with 3 promises under the hood and what their results are.

dealing with result of dogPromise

Summary

In conclusion, understanding promises is fundamental for any JavaScript developer, especially for beginners diving into asynchronous programming. It's not just about knowing how to use them; it's about grasping their underlying concepts and principles.

Promises offer a powerful abstraction for handling asynchronous operations, providing a cleaner and more readable alternative to callbacks. With promises, developers can write code that is easier to reason about, maintain, and debug. They enable more structured error handling and make it possible to compose asynchronous operations clearly and concisely.

Moreover, a deep understanding of promises is crucial for resolving common issues such as race conditions, where multiple asynchronous operations compete to resolve first. By mastering promise chaining, developers can effectively manage concurrency and ensure that their applications behave as expected. A link to the official documentation is here.

codegroove is ON! Bugs' final curtain call!

2024-03-06

soft

Have you ever written a Visual Studio Code extension? Do you want to? Or are you just curious how Microsoft can handle so many extensions in the VSCode ecosystem without breaking the main product? Or maybe you felt inspired to contribute to some open-source project, and you are already on the hunt for something interesting. If the answer to any of these questions is 'YES', I am ready to share my experiences with developing codegroove, a VSCode extension for tracking and analyzing coding time.

The source code for codegroove can be found here

The motivation

There are people on this planet who have a problem with estimating time spent on activities. If you also suffer from that, you probably know that this week you again won't meet your weekly goals, you will be stressing out about deadlines, and you will get paid too little for the project that you underestimated. There is no better way to improve than tracking the time spent and analyzing it. That's why I was looking for a solution that would do it for me. The problem was: most of them needed some sort of token from a third-party website; the metrics were saved in some database with no mention of what exactly is stored there; they would stop tracking the moment you stop typing, which for coding is not optimal, or even worse, you had to stop them manually; and last but not least, the charts were displayed on some website that you had to visit on purpose. To answer all these obstacles, I created codegroove.

How is it different from other solutions?

  • There are no tokens- you just install it, and it works
  • Your data is not going anywhere, it’s saved locally on your computer in a csv file
  • You don't need to go anywhere outside your code editor to see your analytics: go to Show and Run Commands (Ctrl + Shift + P) and type 'show stats', and it will open in another tab
  • Only the data you need: your daily/monthly/yearly stats per project and language
  • I know developing is not always typing, so you have 15 minutes of inactivity time; after that the session is saved, and any activity in VSCode opens a new session

The process

The tools

In order to write and publish any extension, you will need:

  • Node as a runtime
  • Git for easy version control
  • Yeoman and generator-code for creating an extension package
  • vsce for publishing

Extension structure

The main file responsible for running whatever your extension does is extension.ts (or .js if you are writing in vanilla JS), with two functions: activate and deactivate. In codegroove, the activate function instantiates and initiates my main CodeTimer and FileOperator classes. It also registers a command to instantiate the StatsGenerator class, which reads the data from the file and creates a webview panel with some charts.

activate function

The second part of the extension magic happens in package.json, where you provide information about the publisher, the extension itself, and your commands.

package.json

The rest of the project is totally up to you. As you can see, I am actually using only 3 classes: CodeTimer, which tracks your coding sessions based on VSCode events and displays the current session's elapsed time; FileOperator, which deals with creating directories and files and with writing, appending, and reading those files; and StatsGenerator, which creates the webview panel and applies the script that generates the charts and the styles. Sounds simple, doesn't it?

Was it really that simple? The obstacles.

It was a symphony of bugs, actually! And a lot of fun 🙂 Although the mechanics of the extension are pretty simple and the VSCode API documentation is very helpful, its context and workspace are a different world than what you usually encounter.

  • You have to navigate through events you see for the first time and understand how they work, in order not to open a new session on every keystroke, for example.
  • Second of all, every time you open a new project the previous state is gone, the status bar is gone, and a new context opens, so dealing with state is your primary concern in this scenario.
  • You should also remember that every user can have VSCode installed in a different folder, which means navigating through the VSCode workspace in order to save and read files.
  • There was also a consideration of saving data in a JSON file, as it is very easy to read from, but considering that there will be lots of data to store, and I don't really want to read the data before storing anything, I chose CSV. csv-parser only reads the file when you want to see the stats, and plain text takes up less storage, so it seemed like a more efficient way.
  • Creating the webview panel is one thing, but adding styles and a script to it and ensuring proper data flow is another.
  • Working with chart.js for the first time was fun and challenging.
  • The process ended with some issues around publishing, but good old Stack Overflow came to the rescue. It turns out that your Microsoft User Id is not the publisher id that vsce needs to register your token.
  • Last but not least, there were some problems with files in production that were invisible in development mode. Before publishing, I would suggest packaging your extension and installing it locally from the .vsix file in order to check that everything works as intended.

Continued development

The extension is an open-source project, and everyone is welcome to contribute to it. In the near future I will add issues that need some help or are good for first-timers.

I would definitely like to enhance the UI of the charts. I would also like to add a dropdown menu for choosing your inactivity time before the session is saved, and for choosing whether you want to see the time elapsed in the session or per day/month/year. The second part is adding a completely new feature: some music player integration.

Why isn't the whole ecosystem crashing?

As an extension developer you don't actually mess with the VSCode source code in any way. The team gives you an API and some guides on how to make your extension more performant and accessible. That's why I strongly encourage you to try making one on your own. You can't break anything, and you don't really need to know JavaScript, because it can be a theme extension. A very informative guide on how to reason about building one can be found on the VSCode YouTube channel.

Summary

This was a great learning experience. I was able to build something that is actually useful not only for me, but probably for many other developers, and to test myself against the new environment of the VSCode API and new libraries. I am excited to maintain this project in the future and continue to give back to the community this way. I highly encourage you to check out codegroove, as it is available in the VSCode Marketplace, leave a star in my repository, contribute if you will, and stay tuned for the next one!

Shooting for simplicity with JS arrow functions

2024-03-05

soft

We know functions are the building blocks of any program. We have been able to use the beloved concise syntax of arrow function expressions since 2015, but as with everything, we should know and understand their superpowers and limitations. In this post, we will delve into their caveats and nuances.

The syntax

Arrow functions are always anonymous, so the basic syntax consists of a pair of parentheses, an arrow, and an expression.

You can also use rest parameters, default parameters, and destructuring. In these cases parentheses around the parameters are required.

basic arrow function syntax using rest parameters, default parameters and destructuring
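A sketch of those variants (the function names are illustrative):

```javascript
// Expression body with a single parameter
const square = (x) => x * x;

// Rest parameters
const sum = (...nums) => nums.reduce((acc, n) => acc + n, 0);

// Default parameters
const greet = (name = 'stranger') => `Hello, ${name}!`;

// Destructuring a parameter
const fullName = ({ first, last }) => `${first} ${last}`;

console.log(square(4));                                    // 16
console.log(sum(1, 2, 3));                                 // 6
console.log(greet());                                      // "Hello, stranger!"
console.log(fullName({ first: 'Ada', last: 'Lovelace' })); // "Ada Lovelace"
```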

It also works with asynchronous code:

example of using asynchronous code
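For instance (`wait` and `fetchGreeting` are stand-ins for real async work):

```javascript
// An async arrow function returns a promise, just like an async function declaration
const wait = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

const fetchGreeting = async () => {
  await wait(100); // stand-in for a real async operation
  return 'hello';
};

fetchGreeting().then((msg) => console.log(msg)); // "hello"
```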

You can also assign them to a variable and give them a name to make them more reusable:

naming arrow function

As you probably noticed, the body can be either an expression body, where you can omit the curly braces and an explicit return statement (this way it always returns an expression, and only one expression can be specified), or a usual block body, where you can put multiple statements in curly braces. In the latter case the arrow function returns undefined by default, and you have to provide the return value explicitly.

explicit and implicit return statements
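A side-by-side sketch of the two body styles:

```javascript
// Expression body: implicit return of the single expression
const double = (n) => n * 2;

// Block body: an explicit return is required
const doubleExplicit = (n) => {
  const result = n * 2;
  return result;
};

// Block body without a return statement yields undefined
const noReturn = (n) => {
  n * 2; // computed, but never returned
};

console.log(double(3));         // 6
console.log(doubleExplicit(3)); // 6
console.log(noReturn(3));       // undefined
```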

You are probably wondering: what if I want to use the expression body, as it's more concise, but I want to return an object? Well, you won't get the result you want; instead your function will return undefined. Why? Because JS can only understand the expression body if there is no left brace after the arrow. We can simply fix this by wrapping our object literal in parentheses.

returning object literal
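For example:

```javascript
// Looks like an object literal, but the braces are parsed as a block body:
// "name:" is read as a label, and nothing is returned
const broken = () => { name: 'Rex' };
console.log(broken()); // undefined

// Wrapping the literal in parentheses fixes it:
const fixed = () => ({ name: 'Rex' });
console.log(fixed()); // { name: 'Rex' }
```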

As you can see the syntax of arrow functions is super compact and readable. Especially with body expressions, you can omit some boilerplate for traditional function expressions, and save some development time.

What about bindings?

Here comes the first limitation of arrow functions. They don't have their own bindings for 'this', 'arguments', or 'super', so you really shouldn't use them as methods.

using arrow functions as object literal methods
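A sketch of the problem (the `dog` object is illustrative; the arrow's 'this' comes from the surrounding module scope, not from the object):

```javascript
const dog = {
  name: 'Rex',
  // Traditional function: 'this' is the object the method is called on
  barkFn: function () {
    return `${this.name} says woof`;
  },
  // Arrow function: 'this' comes from the enclosing scope, so there is no name here
  barkArrow: () => `${this?.name} says woof`,
};

console.log(dog.barkFn());    // "Rex says woof"
console.log(dog.barkArrow()); // "undefined says woof"
```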

Problems with ‘this’

You may feel tempted, however, to use them in classes. The class's body has its own 'this' context, so using an arrow function as a class field would correctly point to the instance of the class, or the class itself (if used as a static field), because it is a closure over that context.

using arrow function as a class field

But there are two caveats of using 'auto-bound methods', as arrow functions are called in this context. First of all, because it's a closure and not a binding of the method itself, the value of 'this' won't change based on the execution context. This can lead to some unexpected behaviors in your program. Second of all, you have to remember that class fields are defined on the instance of the class, not on its prototype. So every time you create an object based on such a class, a new function reference is created with a new closure, which wastes memory. Here is a proper way of dealing with the 'this' keyword in traditional functions used in classes:

binding traditional function to classes scope
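A sketch of that pattern (the `Counter` class is illustrative): the method lives on the prototype, and one bind in the constructor pins 'this' to the instance.

```javascript
class Counter {
  constructor() {
    this.count = 0;
    // Bind once in the constructor so 'this' survives any execution context
    this.increment = this.increment.bind(this);
  }

  increment() {
    this.count += 1;
    return this.count;
  }
}

const counter = new Counter();
const detached = counter.increment; // would lose 'this' without the bind
console.log(detached()); // 1
console.log(detached()); // 2
```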

Problems with ‘arguments’

Arrow functions do not have their own 'arguments' object. However, if referenced, 'arguments' resolves to the arguments of the closest enclosing traditional function.

arrow function getting arguments from outer scope

We can use rest parameters, though, as an alternative to the arguments object, and it works as expected.

using rest params as arguments
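A side-by-side sketch:

```javascript
// Traditional function: has its own 'arguments' object
function sumAll() {
  return Array.from(arguments).reduce((acc, n) => acc + n, 0);
}

// Arrow function: no 'arguments', but rest parameters work the same way
const sumRest = (...nums) => nums.reduce((acc, n) => acc + n, 0);

console.log(sumAll(1, 2, 3));  // 6
console.log(sumRest(1, 2, 3)); // 6
```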

What about using as constructor?

Arrow functions will throw a TypeError when called with the ‘new’ keyword, so there is no point in using them as constructors.

arrow function throwing an error while used as constructor
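For example (`Dog` is an illustrative name):

```javascript
const Dog = (name) => ({ name });

try {
  const rex = new Dog('Rex'); // arrow functions have no [[Construct]] slot
} catch (err) {
  console.log(err instanceof TypeError); // true
  console.log(err.message);              // e.g. "Dog is not a constructor"
}
```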

What about using it as a generator function?

Arrow functions will throw a SyntaxError if you try to use the ‘yield’ keyword in their body. The only way you can do that is to nest a traditional generator function inside the arrow function.

arrow function throwing an error with 'yield'

Proper generator function

generator function working correctly
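A sketch of the nesting workaround (`makeCounter` is an illustrative name): the arrow itself cannot yield, but it can build and return a traditional generator's iterator.

```javascript
const makeCounter = (start) =>
  (function* () {
    let n = start;
    while (true) yield n++; // yield is legal inside the nested generator
  })();

const ids = makeCounter(10);
console.log(ids.next().value); // 10
console.log(ids.next().value); // 11
```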

Where do those arrows shine?

Shorter syntax for anonymous functions

As we stated before, especially with expression body, arrow functions are very simple and readable alternatives to traditional anonymous functions. Having an implicit return makes this solution even cleaner.

Dealing with array methods

Arrow functions provide a very concise way of writing array methods, enclosing them in one line of readable code:

using arrow functions as callbacks for array methods
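For instance:

```javascript
const nums = [1, 2, 3, 4, 5];

const doubled = nums.map((n) => n * 2);            // [2, 4, 6, 8, 10]
const evens = nums.filter((n) => n % 2 === 0);     // [2, 4]
const total = nums.reduce((acc, n) => acc + n, 0); // 15

console.log(doubled, evens, total);
```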

Dealing with callbacks for event listeners

Some of their characteristics can be both pros and cons. While using them as class fields could potentially lead to bugs when dealing with state, they are a great solution as callbacks for event listeners, as their 'this' context comes from the outer scope of the class.

arrow function as a callback for listener
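A sketch of the idea; `button` here is a hypothetical element stub so the snippet runs outside the browser (in the browser you would get it from `document.querySelector`):

```javascript
// Minimal stand-in for a DOM element
const button = {
  listeners: [],
  addEventListener(type, cb) { this.listeners.push(cb); },
  click() { this.listeners.forEach((cb) => cb()); },
};

class ClickTracker {
  constructor(el) {
    this.clicks = 0;
    // The arrow function has no own 'this', so it closes over the
    // ClickTracker instance instead of the element firing the event
    el.addEventListener('click', () => {
      this.clicks += 1;
    });
  }
}

const tracker = new ClickTracker(button);
button.click();
button.click();
console.log(tracker.clicks); // 2
```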

Summary

I hope I could provide some insights into when and where you can leverage arrow functions effectively, and where caution is needed. They can be a way to write concise, readable, reusable code. Keeping their limitations in mind will help you avoid errors and frustration in the future.

How open is open-source

2024-03-04

soft

You've probably heard about open-source software before. But do you know where this idea comes from? Why is it such a positive buzzword? Who can actually contribute to open-source projects, how, and where can you find them? Today I will try to unveil some of these secrets for you.

A little bit of history

In the early days of computing, the '60s and '70s, software was usually bundled with hardware, which gave users the ability to modify it. The rise of proprietary software models restricted access to source code, limiting this freedom. Of course, for money.

Father of open-source

The idea is often attributed to Richard Stallman. He advocated for software freedom, emphasizing the importance of users having the right to view, modify, and distribute source code. In 1985 he founded the Free Software Foundation, and the GNU General Public License (GPL), introduced in 1989, ensured that software released under it remained free and open.

The rise of Linux

In 1991, using the GNU development tools, Linus Torvalds produced the free monolithic Linux kernel and released it under the GPL. The combination of the Linux kernel with GNU software led to the development of the Linux operating system. Later on, he would also create a distributed version control system: Git.

Golden era of the 90's

The '90s were the beginning of the web development boom. The release of Netscape Navigator's source code marked a shift, and the Open Source Initiative (OSI) was founded to promote open-source principles. From then on the term gained popularity, and companies started adopting open-source as a viable development model. Projects like Apache, MySQL, and PHP would soon become foundational components of web development.

The influence

Open-source projects play a crucial role in shaping the software development industry by promoting collaboration, knowledge sharing, rapid evolution, and innovation. The principles of openness and community-driven development have become integral to the modern software development landscape. Contributing to open source provides developers with valuable experience. They can enhance their skills, build a portfolio, and collaborate with experienced developers, contributing to their professional growth. Frameworks, libraries, and tools built on open-source principles become building blocks for a wide range of applications nowadays.

Is it really open?

The openness of open-source software is characterized by transparency, collaboration, and community-driven development. However, the term "open" should be understood in the context of access to the source code and the freedoms granted to users, rather than implying absolute openness in all aspects. Other aspects of a project, such as decision-making processes, governance, or documentation, might vary and are not implied by the term "open source" alone. While it provides freedoms, it is governed by licenses. Users must comply with the terms of the chosen license, which can vary (e.g., GPL, MIT, Apache). You can read about licensing software on GitHub licensing page

Where to start?

What is funny is that you don't need to code if you don't feel ready yet. You can contribute as a technical writer, content creator, tester, etc. What you will need is basic knowledge of Git and GitHub. I can recommend at least the first few lessons from learn git branching. The next step would be to choose the right project. Here the issue labeling system on GitHub comes to the rescue. Issues are tasks to be done in a certain project. They come with labels like 'bug', 'help-wanted', and the one we are looking for right now: 'good-first-issue'. Where to find them? You can browse GitHub on your own, but for starters, I would suggest using this page: good first issue. What is the next step? Read the CONTRIBUTING.md, README.md, and code of conduct files. There is everything you need to know about the project: conventions, naming, linting, and making pull requests. After that, the only thing left is to fork the repo and start making your first changes. If you still feel lost, check this guide out.

Summary

I hope I succeeded in outlining how contributing to open source can benefit your growth by enhancing your skills and providing valuable experience. It is a way to give back to the community and gain some acknowledgment and experience. It is, after all, the foundation of modern software development and the source of ideas for collaboration, innovation, and evolution.

API- Aspiring Programmer Incubus?

2024-03-01

soft

Have you ever wondered what could be the most frequently asked question in software interviews? It is probably the hero of today's post: the API. If you don't want it to become your incubus anytime soon, you should at least have a grasp of what an API is, what it is used for and why, and be able to talk about some of its forms.

Image by storyset on Freepik.

What the heck is it?

An API, or Application Programming Interface, is a set of defined rules and protocols that allows one software application to interact with another. It serves as a bridge that enables different software systems to communicate and share data or functionality seamlessly. APIs are crucial in modern software development as they facilitate the integration of diverse applications and services, fostering interoperability and collaboration. OK, but what does it mean? In a very simplistic way- thanks to API two interfaces can talk to each other.

Why are there different forms of this thing?

People communicate in various forms, so why wouldn't software mimic this? Especially since, as we all well know, each of these forms has its pros and cons. You can text someone; it will be quick, but the message should be concise and the receiver won't see your body language. You can video call someone, but you have to take into consideration that a bad connection would make it very difficult to exchange thoughts. You can also send a letter with a very well-thought-out message, but it will be the slowest of all methods, and the received data can be obsolete by then. And of course, all this makes it a great interview question on every level.

How many forms are there?

Well, there is more to it than just the REST API you are probably familiar with. Here is a simplified picture of the different forms, and a few words of explanation:

different forms of API
  • Web APIs, most commonly RESTful APIs, are widely used for communication over the internet. They follow the principles of Representational State Transfer (REST): resources are accessible via endpoints through HTTP/HTTPS methods like GET, POST, PUT, and DELETE.
  • SOAP (Simple Object Access Protocol) is a protocol used for communication between web services. SOAP APIs are known for their strict standards and often use XML for message formatting. They can operate over various protocols, including HTTP, SMTP, and more. SOAP is often used in enterprise environments or legacy systems, and while it includes advanced security features, it can be slower than other API architectures.
  • GraphQL is an open-source query language that enables clients to interact with a single API endpoint to retrieve the exact data they need. This approach reduces the interaction time and is great when you deal with unreliable network connections or slow systems.
  • Webhooks are used to implement event-driven architectures, in which requests are automatically sent in response to event-based triggers. For instance, you are paying in an e-commerce store, and this event is sending some payload with request data, triggering the server to send some preconfigured response.
  • RPC stands for Remote Procedure Call, and gRPC is an RPC framework originated by Google. In gRPC architectures, distributed systems talk to each other as if they were local objects.
  • Hardware APIs provide a standard interface for interacting with hardware devices, such as printers, cameras, or sensors. They allow software applications to communicate with and control hardware components.

Why do we use it?

As you already know, in software everything is about data. We collect data, send data, analyze data, etc. APIs enable different software systems to work together, promoting interoperability. Applications and services can communicate seamlessly, regardless of the technologies or platforms they are built on.

APIs allow developers to build modular applications by exposing specific functionalities or data. This modularity promotes code reuse and makes it easier to update or replace individual components without affecting the entire system.

APIs also empower developers to integrate third-party services or features into their applications. This is commonly seen in social media logins, payment gateways, mapping services, and more.

By using APIs, developers can leverage existing functionalities without reinventing the wheel. This leads to increased efficiency, shorter development cycles, and faster time-to-market for applications.

APIs facilitate scalability by allowing components of a system to scale independently. If one part of the application experiences increased demand, developers can scale that specific API or service without affecting the entire system.

APIs provide a controlled and secure way to expose specific functionalities or data. Access to APIs can be authenticated and authorized, ensuring that only authorized users or applications can make requests.

Summary

Modern software development probably wouldn't be possible without APIs. They enable easy, secure, scalable, and structured communication between interfaces. They are saving development time and ease our lives. I hope this post gave you an overview of this engaging topic, and soon we can delve deeper into the REST APIs world for a start.