Bluecoders


All the tech vocabulary you need.

Languages, frameworks, AI, methodologies, roles: 172 essential terms to speak tech with confidence in 2026.


  • A/B Testing

    MethodologyTerm

    A/B testing is an experimentation method that compares two versions (A and B) of the same item — web page, feature, email, price — to measure which performs better against a given goal (conversion rate, clicks, retention).

    Users are randomly assigned to one of the two variants, and statistical analysis tells you whether the observed difference is significant or down to chance. It is a cornerstone of data-driven product and growth marketing.

    Tools like GrowthBook, LaunchDarkly, Optimizely or Statsig run these experiments at scale.
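    The significance check behind an A/B test can be sketched with a standard two-proportion z-test. This is a minimal stdlib-only illustration (the conversion counts are invented; real platforms add sequential testing and multiple-comparison corrections):

```python
import math

def ab_significance(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal CDF (via the error function).
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 4.0% vs 5.0% conversion over 5,000 users per variant.
z, p = ab_significance(200, 5000, 250, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05, so the lift is significant
```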

  • Agile

    Methodology

    Agile is a project management approach that breaks projects down into a series of small, achievable goals. It was shaped in the early 2000s by 17 American engineers who found the traditional management methods of the time too heavy, slow and constraining.

    When working in Agile mode, teams operate in short cycles known as sprints or iterations, usually lasting between one week and one month. This is a far cry from traditional approaches like Gantt charts or waterfall plans that lock projects in for 12 or 24 months.

    There are several ways to apply the Agile methodology. You can use Kanban, Scrum or even Extreme Programming (XP). All these methods build on the foundations of the Agile Manifesto.

  • AI / IA

    Concept

The term "artificial intelligence", coined by John McCarthy, is often shortened to "AI" (or "IA" in French, for intelligence artificielle).
    It is defined by one of its creators, Marvin Lee Minsky, as "the construction of computer programs that engage in tasks which, for now, are performed more satisfactorily by human beings because they require high-level mental processes such as perceptual learning, memory organisation and critical reasoning".

It brings together an "artificial" side, achieved through computers or sophisticated electronic processes, and an "intelligence" side, tied to its goal of imitating behaviour.

    This imitation can apply to reasoning — for example, in games or in mathematics — to natural-language understanding, to perception (visual interpretation of images and scenes, auditory understanding of speech, or input from other sensors), or to controlling a robot in an unknown or hostile environment.

  • AI Act

    TermConcept

    The AI Act (the European regulation on artificial intelligence, adopted in 2024) is the world's first horizontal legal framework governing the development and use of AI systems. It applies to any AI placed on the market or used in the European Union, regardless of where the provider is based.

    It classifies AI systems based on a risk approach: prohibited (social scoring, cognitive manipulation), high-risk (HR, education, justice, critical infrastructure), limited-risk (chatbots) and minimal-risk. General-purpose AI models (GPAI) such as LLMs have their own transparency and risk-management obligations.

    Enforcement rolls out gradually from 2025 to 2027, with fines reaching up to 7% of global turnover.

  • AI Agent

    ConceptTech

    An AI agent is an autonomous program that combines a language model (LLM) with tools (web search, code execution, API access, file manipulation…) to accomplish complex multi-step tasks without human intervention at every step.

    Unlike a simple chatbot that answers a question, an agent plans, executes, observes the result and adapts its strategy. People talk about agentic AI to describe this new generation of AI applications.

    Examples: Claude Code, GitHub Copilot Workspace, Devin, Cursor Agent, Manus. The MCP protocol (Model Context Protocol) standardises the connection between agents and tools.
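    The plan-execute-observe-adapt loop can be sketched in miniature. Everything here is a stub invented for illustration: a real agent calls an LLM instead of `fake_model` and real APIs instead of `search_tool`:

```python
# Minimal agent loop sketch: plan, act, observe, adapt.
def fake_model(goal, observations):
    # Decide the next step from what has been observed so far.
    if "result" in observations:
        return ("finish", observations["result"])
    return ("search", goal)

def search_tool(query):
    # Stub tool standing in for a real web-search API.
    return {"result": f"top answer for '{query}'"}

def run_agent(goal):
    observations = {}
    for _ in range(5):                         # safety cap on iterations
        action, arg = fake_model(goal, observations)
        if action == "finish":
            return arg
        observations.update(search_tool(arg))  # execute the tool, observe
    return None

print(run_agent("latest Python release"))
```

    The key difference from a chatbot is the loop: the model's output feeds back into its next decision until the goal is reached or the iteration cap is hit.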

  • AI Engineer

    Role

    The AI Engineer is a tech profile that emerged with the rise of generative AI. They design and deploy applications that use LLMs, vision models or autonomous agents.

    Unlike a Machine Learning Engineer, who trains models, the AI Engineer mostly works with pre-trained models (Claude, GPT, Mistral, Llama…) and assembles them using techniques such as RAG, prompt engineering, agent orchestration, evaluation and AI observability.

    They typically work in Python or TypeScript, with frameworks like LangChain, LlamaIndex or the Anthropic / OpenAI SDKs, and with vector databases (Pinecone, Qdrant, Weaviate). It is one of the most sought-after roles in 2026.

  • Airflow

    TechTool

    Apache Airflow is an open-source platform for orchestrating data workflows, created at Airbnb in 2014 and later handed over to the Apache Foundation.

    It lets you define pipelines (extraction, transformation, loading, ML, reporting…) as DAGs (Directed Acyclic Graphs) in Python, then schedule, execute, monitor and manage their dependencies and error recovery.

    It is one of the references for modern data engineering, competing with Dagster, Prefect, Argo Workflows and managed services (MWAA on AWS, Cloud Composer on GCP).
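    The DAG idea can be illustrated without Airflow itself: tasks and their dependencies form a graph, and the scheduler runs each task only after its upstreams have completed. A pure-Python sketch of that ordering, using the stdlib (task names are invented; a real Airflow DAG declares operators and wires them with `>>`):

```python
from graphlib import TopologicalSorter

# extract -> transform -> load, with a report depending on load.
deps = {
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

order = list(TopologicalSorter(deps).static_order())
print(order)  # upstream tasks always come before their dependents
```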

  • Angular

    FrameworkTech

Angular (also called "Angular 2+" or "Angular v2 and above") is a TypeScript-based client-side framework, co-led by Google's Angular team and a community of individuals and companies.

    Angular is a complete rewrite of AngularJS, an earlier framework built by the same team. It is used to create web applications, and in particular single-page applications: apps served from a single page that streamline the user experience and avoid full reloads on every action.

    The framework follows an MVC-style architecture that separates data, view and actions for cleaner responsibility management. A proven approach that supports high maintainability and effective collaborative work.

  • API

    TermConcept

An API (Application Programming Interface) is a software interface that lets one piece of software or service "connect" to another in order to exchange data and features.

    APIs unlock many possibilities: data portability, email marketing campaigns, affiliate programmes, embedding features from one site into another, or open data. They can be free or paid. Source: cnil.fr

    Example: when you use an app on your phone, the app connects to the internet and sends data to a server. The server fetches that data, interprets it, performs the required actions and sends data back to your phone. The app then interprets the response and displays the requested information in a readable form. That is exactly what an API is for: the whole process goes through it.
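    That round trip can be sketched in miniature. The `get_user` endpoint and its JSON payload are invented for illustration; a real client would send the request over HTTP:

```python
import json

# A stub standing in for the remote server: it receives a request
# and returns a JSON response, exactly as a real API would over HTTP.
def fake_server(endpoint, params):
    if endpoint == "get_user":
        return json.dumps({"id": params["id"], "name": "Ada", "plan": "pro"})
    return json.dumps({"error": "unknown endpoint"})

# The client side: send a request, parse the JSON, display the result.
raw = fake_server("get_user", {"id": 42})
user = json.loads(raw)
print(f"{user['name']} ({user['plan']})")  # Ada (pro)
```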

  • Astro

    FrameworkTech

Astro is an open-source content-oriented web framework that, by default, ships static HTML pages with no JavaScript sent to the browser ("zero JS"), delivering top performance for editorial, marketing and e-commerce sites.

    Its main innovation is the islands architecture: each interactive component is isolated and hydrated independently, so you can drop reactivity (React, Vue, Svelte, Solid…) only where you actually need it, without paying the cost of a full SPA.

    Astro is used by the websites of Firebase, The Guardian, Cloudflare and Netlify Docs, among others.

  • ATS

    Tool

    An ATS (Applicant Tracking System) is a software platform used by recruiters to manage and automate their hiring process end to end: job posting, CV collection and screening, candidate tracking at every stage, communication with hiring teams and reporting.

    ATSs have become essential for companies and recruitment agencies because they centralise candidate data, automate repetitive tasks (follow-ups, scoring, CV parsing) and provide an overview of the recruitment pipeline. Many now include AI-powered features for matching or sourcing.

    Examples of well-known ATSs:
    - Marvin Recruiter
    - Greenhouse
    - Lever
    - Workable
    - Teamtailor

  • Back-end

    TermConcept

    Back-end is a web development domain — the coded part of an application that is invisible to the user: the server side. All the computing logic that responds to requests sent through the interface lives in the back-end.

    It covers database management, server-to-server communication, API management, accessibility, security and so on. It is often compared to the submerged part of the iceberg, since it represents the bulk of an application's code.

    Examples of back-end languages:
    - Python
    - PHP
    - Ruby
    - Java

  • Blockchain

    TechConcept

    Blockchain is a modern technology for storing and transmitting information. It works without a central control body, yet provides transparency and security thanks to transaction validation by the network.
    A blockchain is essentially a database that holds the history of every exchange made since its creation. Because it is shared among all its users with no intermediary, anyone can verify its validity and confirm its integrity.
    Source: futura-sciences.com

  • Bun

    TechTool

    Bun is a modern JavaScript and TypeScript runtime, written in Zig, that positions itself as an alternative to Node.js and Deno with a focus on performance.

It bundles a runtime, a bundler, a test runner and an npm-compatible package manager into a single tool. Bun runs most Node.js code unmodified and, in its own benchmarks, serves HTTP requests several times faster.

    Created by Jarred Sumner and released as version 1.0 in September 2023, Bun has become a credible option for tooling and edge computing in 2025.

  • Chaos Engineering

    MethodologyConcept

    Chaos engineering is a discipline that consists of deliberately causing failures in production (or in a realistic pre-production environment) to test the resilience of a distributed system and uncover weaknesses before they cause a user-facing incident.

    The approach was popularised by Netflix with Chaos Monkey, a tool that randomly shuts down production instances. The practice spread thanks to platforms like Gremlin, AWS Fault Injection Simulator and LitmusChaos.

    It is a natural complement to SRE culture and observability: you don't really know whether a system is resilient until you've tested it.

  • CI / CD

    ConceptMethodology

    CI/CD (or CI/CD pipeline) stands for Continuous Integration / Continuous Delivery (or Deployment).

    CI/CD is an approach that speeds up the path from code to production by automating every step of development and deployment.

    CI applies to development teams and — as the name implies — continuously integrates the code they produce. Each small piece of the application is tested and merged into the existing code.

    CD applies to operations teams responsible for deploying the code produced upstream. The terms Continuous Delivery and Continuous Deployment are related. Continuous Delivery automates the handover of code from one team to the next, improving visibility and communication between teams that historically struggled to talk to each other (see DevOps).

    Continuous deployment automates the deployment itself — pushing the integrated code (from CI) to the production environment used by end users. Delivery and Deployment — very close concepts, sometimes confused — make up the second half of an application's development cycle.

In a nutshell, CI/CD automates the development, delivery and deployment stages. The DevOps engineer is usually the person who sets up the CI/CD pipeline. Because of its end-to-end automation, the CI/CD approach pairs naturally with Agile and accelerates the evolution and time-to-market of an application.

    It powers a company's ability to ship new features quickly and consistently and to react to customer feedback.

  • Claude

    TechConcept

    Claude is the family of large language models (LLMs) developed by Anthropic, founded in 2021 by former OpenAI researchers focused on AI safety.

    Claude models (Haiku, Sonnet, Opus) are known for their very large context window (up to several million tokens), their strength on code and reasoning, and their training based on Constitutional AI, a variant of RLHF that bakes explicit principles into the model's learning.

    Claude is accessible via claude.ai, the Anthropic API and the major clouds (AWS Bedrock, Google Vertex AI). Claude Code, Anthropic's CLI agent, has become a reference in AI-assisted development tools.

  • Container

    Term

    Containers are executable units of software that bundle application code together with its libraries and dependencies in a common way, so it can run anywhere — on a desktop, on a traditional IT system or in the cloud.
    Source: ibm.com

  • Context Window

    TermConcept

    The context window is the maximum amount of text a language model (LLM) can take into account at once, expressed in tokens.

    It includes the system prompt, the conversation history, any provided documents and the generated response. A larger window lets the model analyse longer documents or maintain a longer conversation without losing information.

    In 2026, frontier models reach several million tokens (Gemini, Claude). But a large window doesn't guarantee quality: phenomena such as lost in the middle and the quadratic complexity of attention remain open challenges.
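    For a feel of the sizes involved, a common rule of thumb for English text is roughly 4 characters per token. This is only a heuristic (real models use subword tokenizers such as BPE, so actual counts vary):

```python
def rough_token_count(text: str) -> int:
    """Very rough token estimate: ~4 characters per token in English.
    Real subword tokenizers (BPE) will give different counts."""
    return max(1, len(text) // 4)

prompt = "Summarise the attached report in three bullet points."
print(rough_token_count(prompt))  # everything counts against the window:
                                  # system prompt, history, documents, reply
```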

  • Copilot

    TechTool

    A copilot is an AI-powered software assistant that suggests actions, content or code to a user in real time — without replacing them.

    The term was popularised by GitHub Copilot, launched in 2021, which suggests code in the IDE. Microsoft has since extended the brand across its entire product line (Microsoft 365 Copilot, GitHub Copilot Workspace, Copilot Studio…). Other players offer their own equivalents: Cursor, Windsurf, Tabnine, Codeium, Claude Code, JetBrains AI Assistant.

    Not to be confused with an autonomous agent: a copilot proposes, the human validates at each step.

  • CRM

    Tool

CRM stands for Customer Relationship Management. More commonly, "CRM" refers to the tool or system that supports it.

    A CRM is therefore a tool used to manage, streamline and improve a company's relationship with its customers: tracking the sales process, centralising contact and customer information (phone, email, contracts, etc.), after-sales support, and more. The goal of a CRM is to simplify the internal back-office of customer relationships so the external customer experience is better.

  • CSS

    Language

CSS stands for "Cascading Style Sheets". CSS is a computer language used to style web pages (HTML). The language is made up of those famous "cascading style sheets" — also called CSS files (.css) — which contain styling rules.
    Source: atinternet

  • CTO

    Role
  • Cybersecurity Analyst

    Role

    The Cybersecurity Analyst is in charge of detecting, analysing and responding to information security threats within a company.

    They continuously monitor systems via a SIEM (Security Information and Event Management), investigate alerts, qualify incidents and coordinate the response with IT and business teams. Depending on the organisation, they may sit in an internal or external SOC (Security Operations Center).

    It is a role under very strong demand in 2026, driven by the rise in cyberattacks and the rollout of regulations such as NIS2 or DORA.

  • DAO

    TermTech

    A DAO (Decentralised Autonomous Organisation) is a structure whose governance rules are coded into smart contracts on a blockchain, and whose decisions are taken collectively by the holders of governance tokens.

    DAOs often manage a shared treasury (in cryptocurrencies), with on-chain votes replacing a traditional board of directors. They are used to steer DeFi protocols (Uniswap, MakerDAO), open-source projects or investment collectives.

    Their legal status remains unclear in most jurisdictions, which is one of the main barriers to wider adoption.

  • Data

    Term

    Talked about endlessly since the dawn of the digital era, data is the new black gold — if not the new blue gold. Data is everywhere. We all generate data every day when we use a digital service: a WhatsApp message, a Google search, an online payment, a new video on a social network — even just visiting a web page without clicking.

    A piece of data is a recorded piece of information that can be retrieved and traced back. Whether it lives in the cloud or on a computer, it is stored on hardware such as the hard drive of your PC or the server of a datacenter on the other side of the world.

    Used mainly in the big-data world (the study of large data volumes), it is collected and stored in companies by Data Engineers so it can be used and valued by Data Scientists and Data Analysts.

  • Data Analyst

    Role
  • Data Contract

    TermMethodology

    A data contract is a formal, versioned agreement between a data producer (back-end team, source application) and its consumers (data engineers, analytics, ML) that describes the schema, semantics, SLAs and quality rules of the data being exchanged.

    The data contract documents fields, types, allowed values, update frequency, acceptable latency and policies for breaking changes. It serves as the source of truth for validating data at production and consumption time, putting an end to silent regressions that break analytics pipelines.

    It is a pillar of modern data architectures (data mesh in particular), often implemented with tools such as Soda, Great Expectations, Datacontract.com, or directly in the Kafka schema registry.
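    A data contract validation can be sketched in a few lines. The fields and rules below are invented for illustration; real tools express the contract declaratively (YAML, schema registry) and check it in the pipeline:

```python
# A miniature data contract: expected fields and types for an
# "order" event.
CONTRACT = {"order_id": int, "amount_eur": float, "status": str}

def validate(record: dict) -> list[str]:
    """Return the list of contract violations for one record."""
    errors = [f"missing field: {f}" for f in CONTRACT if f not in record]
    errors += [
        f"wrong type for {f}: expected {t.__name__}"
        for f, t in CONTRACT.items()
        if f in record and not isinstance(record[f], t)
    ]
    return errors

print(validate({"order_id": 1, "amount_eur": 19.9, "status": "paid"}))  # []
print(validate({"order_id": "1", "amount_eur": 19.9}))  # two violations
```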

  • Data Engineer

    Role
  • Data Lakehouse

    TechTerm

    A data lakehouse is a data storage architecture that combines the flexibility and low cost of a data lake (raw files in an S3-style object store) with the transactional and analytical capabilities of a data warehouse (ACID, schemas, performant SQL queries).

    The lakehouse relies on open table formats such as Apache Iceberg, Delta Lake or Apache Hudi, which add a metadata layer on top of Parquet files to offer transactions, time travel and schema evolution.

    Reference platforms include Databricks (Delta Lake), Snowflake (Iceberg), AWS Athena and Iceberg-native deployments on Trino, DuckDB or ClickHouse. The lakehouse has become, in 2026, the dominant pattern for analytics at scale.

  • Data Mesh

    MethodologyTerm

    The data mesh is an organisational and technical approach to data, conceptualised by Zhamak Dehghani in 2019, which proposes to decentralise data ownership by business domain rather than concentrating it in a central data team.

    Four principles: domain ownership (each business team owns its data), data treated as a product (data products with SLAs, documentation and an owner), a shared self-service platform, and federated governance based on contracts.

    The data mesh is implemented through data contracts, a data catalogue (DataHub, OpenMetadata) and an internal data platform. Best suited to large multi-domain organisations; often overkill for smaller structures.

  • Data Scientist

    Role
  • Database / DB

    TechTerm

    A collection of data stored and organised so it can be manipulated and retrieved. Picture Excel spreadsheets with millions or even billions of rows.

Example: the customer list of an e-commerce store with names, addresses and phone numbers. You talk to databases through queries. In web development, you frequently come across relational databases — queried in SQL — and non-relational databases known as "NoSQL".

Examples of SQL database managers:
    - MySQL
    - SQLite
    - PostgreSQL

    Examples of NoSQL database managers:
    - MongoDB
    - Cassandra
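    A relational query can be tried directly with Python's built-in sqlite3 module. The table and rows are invented for illustration:

```python
import sqlite3

# In-memory relational database: create a table, insert rows, query in SQL.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (name TEXT, city TEXT)")
con.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [("Alice", "Paris"), ("Bob", "Lyon"), ("Chloé", "Paris")],
)

# A query retrieves exactly the rows matching the condition.
rows = con.execute(
    "SELECT name FROM customers WHERE city = ? ORDER BY name", ("Paris",)
).fetchall()
print([name for (name,) in rows])  # ['Alice', 'Chloé']
```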

  • dbt

    TechTool

    dbt (data build tool) is an open-source framework that lets data analysts and data engineers transform data directly in the data warehouse using SQL, while applying software-engineering best practices: Git versioning, tests, documentation, modularity and CI/CD.

    A dbt project consists of models (versioned SQL files), tests (assertions on the data), sources (references to raw tables) and macros (Jinja). At runtime, dbt compiles everything into native SQL and executes it on Snowflake, BigQuery, Redshift, Databricks or Postgres.

    dbt has become the de-facto standard of the modern data stack, around which much of the analytics engineer role is organised.

  • DDD

    Methodology

    DDD (Domain-Driven Design) is a software design approach that represents the business domain directly in the code, rather than treating the business as an afterthought layer.

    DDD is a design philosophy: you start from the business in order to build the solution that serves it — i.e. your code. The structure, the names of classes and fields, and the actions of functions should all reflect the business language (this is known as the ubiquitous language).

    In an ideal world, a business person should almost be able to understand the intent of the code they are reading. Domain-Driven Design is, above all, the technical team's understanding of the business domain.

    It's more than a technique — it's a design heuristic: you look for intuition about the business code you need to produce.

    Source: alexsoyes.com

  • Deep Learning

    TechConcept

Deep learning is one of the main technologies underpinning Machine Learning. With Deep Learning, we are talking about algorithms able to mimic the actions of the human brain through artificial neural networks. These networks consist of dozens or even hundreds of "layers" of neurons, each receiving and interpreting information from the previous layer.

    Source: datascientest
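    One such "layer" is just arithmetic: each neuron computes a weighted sum of its inputs plus a bias, then applies a non-linearity. A toy forward pass (the weights here are invented; in practice they are learned during training):

```python
def layer(inputs, weights, biases):
    """One dense layer: weighted sum + bias, then ReLU non-linearity."""
    return [
        max(0.0, sum(w * x for w, x in zip(neuron, inputs)) + b)
        for neuron, b in zip(weights, biases)
    ]

x = [1.0, -2.0]                                      # input signal
h = layer(x, [[0.5, 0.1], [-0.3, 0.8]], [0.1, 0.0])  # hidden layer, 2 neurons
y = layer(h, [[1.0, 1.0]], [0.0])                    # output reads the hidden layer
print(y)
```

    Stacking dozens of such layers, each interpreting the output of the previous one, is what makes the network "deep".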

  • DeFi

    TermTech

    DeFi (Decentralised Finance) refers to the set of financial services — lending, borrowing, swapping, derivatives, insurance — operated through smart contracts on a public blockchain (mostly Ethereum and its Layer 2s), without going through a bank or a centralised intermediary.

    The main primitives of DeFi are DEXs (Uniswap, Curve), lending protocols (Aave, Compound), stablecoins (USDC, DAI) and yield aggregators.

    DeFi enjoyed spectacular growth between 2020 and 2022, followed by consolidation and increased regulation (notably the European MiCA regulation, in force since 2024).

  • Deno

    TechTool

    Deno is an open-source JavaScript and TypeScript runtime created by Ryan Dahl — the original author of Node.js — and unveiled in 2018 as a response to the design flaws he saw in Node.

    Deno is secure by default (a script must explicitly request network, filesystem or other access), supports TypeScript natively, uses Web standards (fetch, ESM, Web APIs) and ships with its own tooling (formatter, linter, test runner, bundler).

    Version 2 (2024) brought full npm compatibility and serverless deployment through Deno Deploy, making it a serious option for the edge and modern tooling.

  • Design pattern

    Term

    A design pattern is a typical arrangement of modules, recognised as a best practice in response to a recurring software design problem. It describes a standard, reusable solution for designing different pieces of software.

    Put differently, the goal of a design pattern is to standardise certain best practices, starting from a blueprint that is known to work for a given problem type. You don't reinvent the wheel: you save time and gain reliability by reusing a proven model.

    Source: Wikipedia
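    A classic example is the Strategy pattern: interchangeable algorithms behind one interface. The shipping rules below are invented for illustration:

```python
# Strategy pattern: the caller picks the algorithm, the calling code
# stays generic.
def flat_rate(weight_kg: float) -> float:
    return 5.0

def by_weight(weight_kg: float) -> float:
    return 2.0 * weight_kg

def shipping_cost(strategy, weight_kg: float) -> float:
    return strategy(weight_kg)

print(shipping_cost(flat_rate, 3.0))  # 5.0
print(shipping_cost(by_weight, 3.0))  # 6.0
```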

  • Developer Advocate / DevRel

    Role

    The Developer Advocate (or DevRel, for Developer Relations) is a hybrid profile sitting at the intersection of engineering, product and marketing. Their job is to represent developers internally and to represent the product to the developer community.

    They write technical content (docs, blog posts, samples), speak at conferences, run communities (Discord, Slack, GitHub), gather user feedback and bring it back to product and engineering.

    The role is central for API, SDK and developer-tooling companies (Stripe, Vercel, Anthropic, Resend, Supabase…), where adoption hinges heavily on the quality of the developer experience.

  • DevOps

    Role
  • Django

    FrameworkTech

    Django is an open-source web framework written in Python. Its goal is to make web application development simple and based on code reuse.

Built in 2003 for the local newspaper of Lawrence (Kansas, USA), Django was released under the BSD licence in July 2005. Since June 2008, the Django Software Foundation has overseen its development and promotion.

Alongside this ongoing work, conferences between developers and users — DjangoCon — have been held twice a year since 2008, one in Europe and one in the United States. Several mainstream sites are built on Django, including Pinterest, Instagram (at least as of 2011) and Mozilla.

  • Docker

    Tech

    Docker is a free, open-source piece of software that automates application deployment. It was developed by Solomon Hykes at dotCloud and released in March 2013.

    It is a container-based virtualisation platform that lets you design, test and deploy applications quickly. With Docker, it is easy to deploy and scale applications in any environment while making sure the code runs automatically.

    Source: syloe.com

  • E-commerce

    Term

    E-commerce (or ecommerce) is a commercial practice that connects merchants and buyers over the internet. Goods and services are transacted via an online store, a mobile app or other sales channels such as social networks, price comparison sites, marketplaces, affiliate platforms and retargeting platforms.

    Its advantages:
    - The ability to reach a wide audience.
    - A detailed view of customer purchase behaviour thanks to a rich set of marketing and online analytics tools.
    - The ability to run highly targeted marketing campaigns and offer buyers a high-quality, unique online experience.
    - Unlike a physical store, your e-commerce site is always open, so you don't miss a sale.

  • Edge Computing

    TechConcept

    Edge computing is an architectural model where compute runs as close to the end user as possible — on geographically distributed points of presence (PoPs) — rather than in a single, distant cloud region.

    The goal is to reduce latency (code responds within tens of milliseconds even on the other side of the world) and to offload origin servers. It has become a standard for web rendering (Vercel Edge, Cloudflare Workers, Netlify Edge Functions, Deno Deploy), personalisation, authentication and request routing.

    Edge runtimes are typically more constrained than a full Node.js (no filesystem, limited APIs) because they run in V8 isolates rather than containers.

  • Embeddings

    TermConcept

    An embedding is a numerical representation (a vector of several hundred to a few thousand dimensions) of a piece of content — text, image, audio — computed by a machine-learning model in such a way that semantically similar contents produce nearby vectors in that space.

    Embeddings are the foundational building block of semantic search, clustering, classification — and most importantly RAG (Retrieval-Augmented Generation), where they allow you to retrieve passages from a corpus relevant to a question.

    They are stored and queried efficiently in vector databases (Pinecone, Qdrant, Weaviate, pgvector). The main embedding model providers in 2026 are OpenAI, Voyage, Cohere, Mistral and open-source models like BGE and E5.
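    "Nearby vectors" is usually measured with cosine similarity. A stdlib-only sketch with toy 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and the values here are invented):

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors: 1.0 = same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

cat = [0.9, 0.1, 0.0]
kitten = [0.85, 0.15, 0.05]
invoice = [0.0, 0.2, 0.9]

# Semantically close contents score higher than unrelated ones.
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, invoice))  # True
```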

  • Engineering Manager

    Role
  • FDD

    Methodology

    Feature-Driven Development (FDD) is an iterative and incremental software development process.
    It is one of the lightweight or Agile methods used to develop software. FDD blends a number of industry-recognised best practices into a coherent whole.
    These practices are all rooted in a customer-valued feature perspective. Its main goal is to deliver tangible, working software repeatedly and on time.

  • Feature Flag

    MethodologyTerm

    A feature flag (or feature toggle) is a mechanism for enabling or disabling a feature in a piece of software via runtime configuration, without deploying new code.

    Feature flags are used to: run progressive rollouts (canary release, rollout by cohort), run A/B tests, separate deployment from release, kill a buggy feature quickly (kill switch) or manage paid access.

    Reference platforms include LaunchDarkly, GrowthBook, Statsig, Unleash and Flagsmith. It is an essential tool of continuous delivery (CI/CD) and trunk-based development.
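    A minimal flag check with a percentage rollout can be sketched as follows: each user is hashed into a stable bucket from 0 to 99, so the same user always gets the same answer. Flag names and percentages are invented for illustration:

```python
import hashlib

FLAGS = {"new_checkout": 20}   # enabled for 20% of users

def is_enabled(flag: str, user_id: str) -> bool:
    rollout = FLAGS.get(flag, 0)          # unknown flag: disabled
    # Stable bucket 0-99 derived from the flag + user pair.
    digest = hashlib.sha256(f"{flag}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < rollout

print(is_enabled("new_checkout", "user-42"))   # stable for this user
print(is_enabled("other_flag", "user-42"))     # False
```

    Raising the percentage in configuration widens the rollout without any deployment, which is exactly the point of the technique.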

  • Feature Store

    TechTerm

    A feature store is a centralised service that stores, serves and versions the features used by machine-learning models — both for training (offline) and for production inference (online).

    It addresses two classic problems of ML in production: reusing features across projects (a single customer lifetime value calculation can serve 10 models) and ensuring consistency between data seen at training time and at inference time (the infamous train/serve skew).

    Reference solutions include Tecton, Feast (open source), Hopsworks, Databricks Feature Store and Vertex AI Feature Store.

  • Fine-tuning

    TermConcept

    Fine-tuning is the process of continuing the training of a pre-trained AI model (for example an LLM) on a dataset specific to a domain or task, in order to specialise its behaviour without starting from scratch.

    Several variants exist: classic supervised fine-tuning, RLHF (Reinforcement Learning from Human Feedback), DPO (Direct Preference Optimisation) and parameter-efficient techniques such as LoRA and QLoRA, which only update a small part of the model.

    In 2026, fine-tuning is still useful for niche cases (style, tone, domain vocabulary), but it is often superseded by RAG and good prompting, which are simpler to keep up to date.

  • FinOps

    MethodologyRole

    FinOps (a contraction of Finance + DevOps) is a cloud cost-management practice that gives technical teams financial accountability for their cloud usage, with the same continuous-improvement culture as DevOps.

    A FinOps Engineer (or FinOps Practitioner) tracks cloud spend by team and by product, identifies waste (oversized instances, forgotten resources, wrong service choices), sets up reserved instances and savings plans, and arbitrates between cost, performance and reliability.

    The FinOps Foundation, hosted by the Linux Foundation, formalises best practices. The role is exploding in 2026 with the rise of AI- and storage-related costs.

  • Fork

    Term

    A fork is an independent copy of a software project, made from its source code in order to evolve it in a direction different from the original project.

    In the open-source world, forking is common on platforms like GitHub or GitLab: a developer forks a repository to contribute back (via a pull request) or to start a derivative project. It is a fundamental mechanic of collaboration on public code.

    Famous examples of forks that became projects in their own right:
    - LibreOffice, fork of OpenOffice
    - MariaDB, fork of MySQL
    - io.js, fork of Node.js (since merged back)
    - Ubuntu, fork of Debian

  • Framework

    Tech

Often described as a "working scaffold", a framework is a development skeleton paired with a toolbox.

    Always tied to a language, frameworks are often built by open-source communities. Each framework provides self-contained components that make application development easier by imposing a structure and a working discipline.

    Examples of well-known frameworks:
    - Symfony, Laravel and Zend for PHP
    - Spring and Spark for Java
    - Django, Pyramid or Flask for Python
    - React and VueJS for JavaScript
    - Rails (RubyOnRails) for Ruby

  • From scratch

    Term

    Quite simply: starting from nothing, starting from zero. In tech, you often hear « building an application from scratch ». It means designing and developing an application from a blank page, rather than building on top of the latest version of the app or a similar template.

    The upside of starting from zero is more design freedom and building only what is strictly necessary, without dragging along unnecessary dependencies, plug-ins or patches. The downside: it obviously requires more time and more skill.

  • Front-end

    TermConcept

    Front-end is the web development domain that covers the visible, coded part of a web product: the client side. It involves building a user interface from design mockups, and is often compared to the tip of the iceberg: the smaller, visible portion of an application's code.

    Examples of front-end languages:
    - HTML
    - CSS
    - JavaScript and its frameworks ReactJS, VueJS and AngularJS

  • Fullstack

    Term

    Fullstack usually describes a developer profile able to handle both back-end and front-end problems.

    Valued by some for their versatility, fullstack developers earn salaries broadly similar to back-end or front-end specialists, and are often found in early-stage companies where versatility matters more than deep expertise.

    As the technical team grows, the focus shifts to the expertise of specialised back-end or front-end developers.

    Some see the fullstack profile as a Swiss-Army-knife developer, able to operate at every stage of building a web product: from requirements gathering to delivery.

  • Gemini

    TechConcept

    Gemini is the family of multimodal AI models developed by Google DeepMind, launched in late 2023 as the successor to the PaLM models (the Bard assistant was itself later renamed Gemini).

    Gemini is natively multimodal: a single model processes text, image, audio, video and code in a unified representation space. The Pro and Ultra versions compete with GPT and Claude on reasoning benchmarks, and Gemini stands out for its very long context window (up to several million tokens).

    The family is accessible via gemini.google.com, the Gemini API, Google AI Studio and Vertex AI on Google Cloud. It also powers Google Search (AI Overviews), Gmail and Workspace.

  • GitOps

    MethodologyTech

    GitOps is an operating model for managing infrastructure and application deployment in which a Git repository is the single source of truth for the desired state of the system.

    A GitOps operator (ArgoCD, Flux) runs continuously inside the Kubernetes cluster, watches the repository and automatically reconciles the actual state to the state declared in Git. Every change is made through a pull request: you get versioning, code review, audit and reproducibility for free.

    GitOps has become the standard for application deployment on Kubernetes, complementing IaC (Terraform, OpenTofu) which manages the underlying infrastructure.

  • Go

    LanguageTech

    Go is a compiled, concurrent programming language inspired by C and Pascal. It was developed at Google from an initial concept by Robert Griesemer, Rob Pike and Ken Thompson.

  • GPT

    TechConcept

    GPT (Generative Pre-trained Transformer) is the family of language models developed by OpenAI since 2018. It triggered the mass-market explosion of generative AI when ChatGPT launched in November 2022.

    GPT models are decoder-only transformers trained to predict the next token on huge text corpora, then fine-tuned with RLHF to follow instructions and respect safety constraints.

    The current versions (GPT-4, GPT-4o, GPT-5) are multimodal and accessible via chatgpt.com, the OpenAI API and the major clouds. The term « GPT » is often used loosely to refer to any LLM, even though OpenAI defends it as its own brand.

  • GraphQL

    TechTerm

    GraphQL is a query language for APIs and a server-side runtime, created at Facebook in 2012 and open-sourced in 2015. It offers an alternative to REST in which the client describes exactly what data it needs, in a single request, rather than chaining multiple endpoints.

    A GraphQL API exposes a single typed schema; the client sends queries (reads), mutations (writes) and subscriptions (real-time) to a single endpoint. This eliminates the over-fetching and under-fetching typical of REST.

    The ecosystem is mature in 2026: Apollo, Relay, urql on the client side; Hasura, PostGraphile, Yoga, Apollo Server on the server side. GraphQL is often compared to tRPC in end-to-end TypeScript architectures.
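
    For instance, instead of chaining `/users/42` and `/users/42/posts` calls in REST, the client asks for exactly the fields it needs in a single query (the schema and field names below are hypothetical):

    ```graphql
    query {
      user(id: "42") {
        name
        posts(last: 3) {
          title
        }
      }
    }
    ```

    Mutations and subscriptions follow the same shape, sent to the same single endpoint.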

  • Guardrails

    TermConcept

    Guardrails on an AI application are the set of controls placed around a model to constrain its behaviour: input filtering (prompt injection, forbidden content), output validation (toxicity, information leakage, expected format), limits on the tools accessible to an agent and policies for escalating to a human.

    They are essential in production because LLMs are not deterministic: a system without guardrails can hallucinate, leak sensitive data or be hijacked by a malicious user.

    Dedicated frameworks (Guardrails AI, NeMo Guardrails, AWS Bedrock Guardrails, Lakera, the moderation APIs from OpenAI / Anthropic) make implementation easier.
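
    As a minimal sketch, independent of the frameworks above, an output guardrail can simply validate the model's answer before it reaches the user; the function name and patterns here are illustrative:

    ```python
    import json
    import re

    # Hypothetical output guardrail: only let through answers that are valid
    # JSON with the expected keys and contain no obvious secret-like patterns.
    SECRET_PATTERN = re.compile(r"(api[_-]?key|password)\s*[:=]", re.IGNORECASE)

    def validate_output(raw: str, required_keys: set[str]) -> dict:
        """Return the parsed answer, or raise if the guardrail rejects it."""
        if SECRET_PATTERN.search(raw):
            raise ValueError("guardrail: possible credential leak in output")
        try:
            data = json.loads(raw)
        except json.JSONDecodeError as exc:
            raise ValueError("guardrail: output is not valid JSON") from exc
        missing = required_keys - data.keys()
        if missing:
            raise ValueError(f"guardrail: missing keys {missing}")
        return data

    # A well-formed answer passes; a leaky or malformed one is blocked.
    ok = validate_output('{"answer": "42", "sources": []}', {"answer", "sources"})
    ```

    Real guardrail stacks layer many such checks (toxicity, grounding, tool limits) around every model call.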

  • Hallucination (AI)

    TermConcept

    We say a generative AI model hallucinates when it produces false or invented information while presenting it with confidence — a fabricated citation, code that calls a non-existent API, a made-up historical fact.

    The phenomenon is inherent to LLMs, which generate by statistically predicting the next token rather than reasoning over verified facts. It gets worse outside the training distribution and on niche topics.

    The usual countermeasures are RAG (grounding generation in trustworthy sources), post-hoc verification (a second call that validates facts), explicit source citation, prompts that allow the model to say « I don't know », and continuous evaluation of outputs.

  • Helm

    TechTool

    Helm is the de-facto package manager for Kubernetes: it lets you package, distribute and install complex applications (with their deployments, services, ConfigMaps, secrets…) as a single artefact called a chart.

    A chart is a set of parameterisable YAML templates (via a `values.yaml` file) describing the Kubernetes resources to create. Helm handles versioning, upgrades, rollbacks and dependencies between charts.

    It is one of the graduated projects of the Cloud Native Computing Foundation (CNCF) and is used to distribute nearly every tool in the Kubernetes ecosystem (Prometheus, ArgoCD, cert-manager, ingress controllers…).
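
    For example, a chart template injects chart values into plain Kubernetes YAML; the names and values below are illustrative:

    ```yaml
    # templates/deployment.yaml (excerpt): Helm renders the {{ ... }}
    # expressions from the chart's values before talking to Kubernetes.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: {{ .Release.Name }}-web
    spec:
      replicas: {{ .Values.replicaCount }}  # supplied by values.yaml, e.g. replicaCount: 3
    ```

    Running `helm install my-app ./chart --set replicaCount=5` overrides that default at install time.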

  • HTML

    LanguageTech

    HTML stands for « HyperText Markup Language ». It is used to create and represent the content of a web page and its structure. Other technologies are used alongside HTML to describe a page's presentation (CSS) and its interactive features.

  • HTTP / HTTPS

    Term

    HTTP stands for « Hypertext Transfer Protocol ».
    HTTP is a client–server communication protocol invented by Tim Berners-Lee. It defines how browsers request and receive the web pages that most people simply call « the internet ».

    It goes further than the earlier FTP (File Transfer Protocol), which transferred files without considering their format: information was sent, but it was up to the receiver to interpret images, sounds, text and so on.

    HTTP, by contrast, conveys each resource's format (via MIME types). Combined with the HTML language (designed to write hypertext documents, now called « web pages ») and web addresses (commonly known as URLs), these three inventions laid the foundation of the World Wide Web — the famous www. The web as the general public knows it was born in 1990 from this combination of innovations.

    What about HTTPS? It is an extension of HTTP where the S stands for « secure »: exchanges are encrypted, protecting them from interception and tampering.

  • IaaS

    Term

    IaaS (Infrastructure-as-a-Service) is a cloud computing model in which a company rents IT infrastructure (compute, storage, network) from a cloud provider rather than buying and maintaining its own hardware.

    With IaaS, the company keeps control of its critical applications, security systems, databases and operating systems, but frees itself from physically running servers and datacenters. This maximises cost control while delivering greater scalability and agility.

    The difference between IaaS, PaaS and SaaS

    Each model covers a different type of resource and has its own distribution, billing and usage logic.

    • IaaS (Infrastructure as a Service): a set of raw computing resources offered by a cloud provider. Used to virtualise infrastructure or for resource-intensive projects: machine learning, big data, hosting.
    • PaaS (Platform as a Service): a platform delivered over the internet on which teams (usually developers) build applications without managing the underlying infrastructure.
    • SaaS (Software as a Service): the most widespread cloud service. Software runs on a provider's infrastructure; the user pays a licence and never deals with storage or hardware.

    Source: ovhcloud

  • IaC

    MethodologyTech

    IaC (Infrastructure as Code) is the practice of describing and provisioning IT infrastructure (networks, machines, databases, IAM…) using versioned code, rather than through manual clicks in a cloud console.

    IaC brings reproducibility, code review, audit, rollback and on-demand ephemeral environments (staging on demand). It comes in two flavours: declarative (you describe the desired state: Terraform/OpenTofu, Pulumi, AWS CloudFormation, Kubernetes manifests) and imperative (Ansible, Chef).

    It is a cornerstone of modern DevOps and a prerequisite for GitOps.
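
    In the declarative flavour you describe the resource and the tool works out how to reach that state; a minimal Terraform/OpenTofu sketch (the names and region are illustrative):

    ```hcl
    # Declarative IaC: describe the desired state; `terraform apply`
    # (or `tofu apply`) reconciles reality with it.
    provider "aws" {
      region = "eu-west-3"
    }

    resource "aws_s3_bucket" "assets" {
      bucket = "my-app-assets-example"   # illustrative bucket name

      tags = {
        Environment = "staging"
        ManagedBy   = "terraform"
      }
    }
    ```

    Because this file lives in Git, the change goes through review before anything touches the cloud account.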

  • Inference

    TermConcept

    In machine learning, inference refers to the stage at which a trained model is used to make predictions on new data, as opposed to the training phase.

    For an LLM, an inference is a call that takes a prompt and returns a completion. It has a compute cost (often expressed in tokens), a latency (time to first token, tokens per second throughput) and a financial cost that can become significant at scale.

    Inference optimisation (quantisation, batching, KV cache, speculative decoding, distillation) has become an engineering discipline of its own, with specialised engines (vLLM, TensorRT-LLM, llama.cpp) and dedicated providers (Together AI, Fireworks, Groq).
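
    Order-of-magnitude costing is simple arithmetic: input and output tokens times a per-token price. A sketch with invented prices (real pricing varies by provider and model):

    ```python
    # Back-of-the-envelope inference cost. Prices are invented for
    # illustration, expressed in dollars per million tokens.
    PRICE_IN_PER_M = 3.00    # hypothetical $/1M input tokens
    PRICE_OUT_PER_M = 15.00  # hypothetical $/1M output tokens

    def call_cost(input_tokens: int, output_tokens: int) -> float:
        """Cost in dollars of a single prompt -> completion call."""
        return (input_tokens * PRICE_IN_PER_M
                + output_tokens * PRICE_OUT_PER_M) / 1_000_000

    # One call with a 2 000-token prompt and a 500-token answer is cheap...
    per_call = call_cost(2_000, 500)   # $0.0135
    # ...but a million such calls per day is not.
    per_day = per_call * 1_000_000     # ≈ $13,500 per day
    ```

    This is why quantisation, batching and caching matter: they attack exactly these per-token and per-call numbers.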

  • Information System

    Term

    An information system (IS) is the set of social and technical resources used to collect, store, process and distribute information within an organisation.

    An IS is therefore made up of two sub-systems:
    - A social sub-system: the human organisation acting within the information system.
    - A technical sub-system: the technological equipment (hardware, software and networks).

    Technological innovation, by bringing automation and dematerialisation, gradually transforms these systems: automation reshapes the social sub-system, while dematerialisation reshapes the technical sub-system.

  • Integration

    Concept

    IT integration, or systems integration, is the act of connecting data, applications, APIs and devices across your IT estate in order to boost efficiency, productivity and agility.

    Integration lets all the pieces of an IT environment work together. It is therefore a key part of any business transformation — its ability to adapt to a changing market.

    Integration goes beyond just connecting things — it adds value. By linking the features of multiple systems, it can unlock new capabilities.

    Not to be confused with continuous integration, a development practice where developers merge working code versions into a shared central repository several times a day.

    The goal of continuous integration is to automate the creation and verification of new versions in order to catch errors early and accelerate development.
    Source: redhat

  • IoT

    Term

    An acronym for « Internet Of Things », IoT refers to the interconnection of connected objects exchanging data over the internet. A connected fridge, a connected scale, a smartphone or a blood-glucose sensor are all part of the IoT.

    Because IoT is about interconnection between objects, people refer to the IoT sector, IoT technologies or IoT challenges. IoT brings several challenges of its own:
    - Application development: programming the firmware that receives sensor data before sending it to the server side. People often talk about embedded systems.
    - Data transfer: back-end and SQL challenges to make sure data is stored and accessible.
    - Data display: usually as dashboards — making the data readable to the end user through an understandable interface. This is a front-end or mobile challenge depending on the device.

  • IP Address

    TermConcept

    An IP address (Internet Protocol address) is an identification number assigned permanently or temporarily to every device connected to the internet that uses the Internet Protocol. It is unique and lets the network identify the device.
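
    Parsing and classifying addresses is a solved problem in most languages; for example with Python's standard `ipaddress` module:

    ```python
    import ipaddress

    # Parse addresses and query their properties with the standard library.
    addr4 = ipaddress.ip_address("192.168.1.10")
    addr6 = ipaddress.ip_address("2001:db8::1")

    print(addr4.version)     # 4
    print(addr4.is_private)  # True: 192.168.0.0/16 is a private range
    print(addr6.version)     # 6

    # Check membership in a network (CIDR notation).
    lan = ipaddress.ip_network("192.168.1.0/24")
    print(addr4 in lan)      # True
    ```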

  • IT Infrastructure

    TermConcept

    As the name suggests, infrastructure is the set of components needed to develop and operate a technical solution. It supports and coordinates all the resources of the technical environment.

    It is composed of:
    - hardware: computers, datacenters, routers, etc.
    - software: the technical applications that allow a web product to be developed, hosted and maintained — servers, operating systems and so on.
    - networks: the components (virtual and physical) that let the system's internal and external pieces talk to each other — cables, firewalls, internal connections, etc.

    Sometimes you'll hear about Cloud infrastructure, which is about virtualising anything that can be virtualised.

  • Java

    LanguageTech

    Java is an object-oriented programming language created by James Gosling and Patrick Naughton, then employees of Sun Microsystems. Sun was acquired by Oracle in 2009, which now owns and maintains Java.
    One of Java's particularities is that programs written in it are compiled to an intermediate representation called bytecode, which runs in a Java Virtual Machine (JVM) independently of the operating system.

  • JavaScript

    LanguageTech

    JavaScript was created in 1995 by Brendan Eich and embedded in the Netscape Navigator browser. It is a scripting language primarily used to build interactive web pages and, as such, is an essential part of web applications.

    Alongside HTML and CSS, JavaScript is at the heart of the languages used by web developers. The vast majority of websites use it, and most browsers ship a JavaScript engine to run it. JavaScript is also used server-side, with runtimes such as Node.js or Deno.

  • Kafka

    TechTool

    Apache Kafka is an open-source distributed event streaming platform, created at LinkedIn, open-sourced in 2011 and later donated to the Apache Software Foundation.

    Kafka stores event streams in partitioned topics, persisted on disk and replicated, with per-partition ordering guarantees and a throughput of several million messages per second. Producers and consumers are decoupled and can replay history at will.

    It is the backbone of real-time data at most large platforms: change data capture (CDC), event sourcing, streaming ML pipelines, event-driven microservices. Confluent is its main commercial vendor; AWS MSK and Aiven offer managed Kafka.
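
    To make the model concrete, here is a toy in-memory log in Python; it is nothing like Kafka's actual implementation, but it illustrates per-partition ordering and replay by offset:

    ```python
    # Toy event log: one topic = a fixed number of append-only partitions.
    class ToyTopic:
        def __init__(self, partitions: int):
            self.partitions = [[] for _ in range(partitions)]

        def produce(self, key: str, value: str) -> int:
            # Same key -> same partition, so events for one key stay ordered.
            p = hash(key) % len(self.partitions)
            self.partitions[p].append(value)
            return p

        def consume(self, partition: int, offset: int = 0) -> list[str]:
            # Consumers track their own offset and can replay history at will.
            return self.partitions[partition][offset:]

    topic = ToyTopic(partitions=4)
    p = topic.produce("order-42", "created")
    topic.produce("order-42", "paid")
    topic.produce("order-42", "shipped")

    print(topic.consume(p))            # ['created', 'paid', 'shipped']
    print(topic.consume(p, offset=1))  # ['paid', 'shipped']
    ```

    Real Kafka adds the hard parts on top: durable disk storage, replication, consumer groups and throughput at scale.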

  • Kanban

    Methodology

    Kanban — often pictured as three columns « To do », « Doing » and « Done » filled with coloured sticky notes — is a working method, or rather (like Scrum) a framework.

    It was devised in the 1950s by Taiichi Ōno, an industrial engineer at Toyota, to optimise car manufacturing.

    Now widely used in software development, it aims to avoid overproduction in industry — and therefore useless code in development — to reduce costs and lead times. Very visual, it also offers a simple read on project progress.

  • Kubernetes

    TechTool

    Kubernetes (often shortened to k8s) is an open-source container orchestrator, created at Google based on the internal experience with Borg and handed over to the Cloud Native Computing Foundation (CNCF) in 2015.

    It automates the deployment, scaling, resilience and networking of containerised applications on a cluster of machines. You declare the desired state (how many replicas, what resources, which services to expose); the control plane continuously reconciles the actual state towards that desired state.

    Kubernetes has become the de-facto standard for cloud-native deployment, available as a managed service on every major cloud (EKS, GKE, AKS) and on European providers (OVH, Scaleway, Clever Cloud).

  • Laravel

    FrameworkTech

    Laravel is an open-source web framework written in object-oriented PHP, following the Model-View-Controller (MVC) pattern. It was created by Taylor Otwell in June 2011 and is released under the MIT licence, with its source code hosted on GitHub.

  • Layer 2

    TechTerm

    A Layer 2 (or L2) is a protocol built on top of a main blockchain (Layer 1, such as Ethereum) to process transactions faster and more cheaply, while inheriting the security of the L1.

    The main L2 types on Ethereum are rollups: optimistic rollups (Optimism, Arbitrum, Base) and zk-rollups (zkSync, Starknet, Linea, Polygon zkEVM), which aggregate thousands of transactions off-chain and then publish a compact proof to the L1.

    L2s have become the default environment for DeFi and most decentralised applications in 2026, with direct on-chain activity on Ethereum itself now in the minority.

  • Lead Dev / Lead Tech

    Role

    The Lead Dev (or Lead Tech) is an experienced developer who provides technical leadership to a team: setting coding standards, reviewing code, steering architecture choices and mentoring less senior developers, usually while still writing code themselves. The exact scope varies by company, sitting somewhere between senior developer and engineering manager.

  • LLM

    TechConcept

    An LLM (Large Language Model) is an AI model trained on enormous text corpora to predict the next word (or, more precisely, the next token) in a sequence. At scale, it produces coherent text, translates, summarises, reasons and generates code.

    Modern LLMs are based on the transformer architecture and typically have hundreds of billions of parameters. They are first pre-trained on raw text, then fine-tuned (instruction tuning, RLHF) to follow instructions and respect safety rules.

    Major examples in 2026: Claude (Anthropic), GPT (OpenAI), Gemini (Google), Mistral, Llama (Meta, open weights), Qwen (Alibaba). They are accessible via API or runnable locally for open-weights models.
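
    Next-token prediction can be caricatured with word-level bigram counts; real LLMs use transformers over subword tokens, but the training objective is the same idea:

    ```python
    from collections import Counter, defaultdict

    # Crude illustration of "predict the next token": count which word follows
    # which in a tiny corpus, then always pick the most frequent successor.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    successors: dict[str, Counter] = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        successors[current][nxt] += 1

    def predict_next(word: str) -> str:
        return successors[word].most_common(1)[0][0]

    print(predict_next("the"))  # 'cat': the most frequent word after 'the'
    ```

    Scale the corpus to trillions of tokens and replace the counting with a transformer, and you have the core of an LLM's pre-training.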

  • LLMOps

    MethodologyRole

    LLMOps is a discipline derived from MLOps, dedicated to the specifics of operating LLM-based applications in production: managing prompts like code, continuous evaluation, observability of model calls, token cost management, monitoring hallucinations and drift, deploying versioned prompts or models.

    It covers cross-cutting tooling: prompt versioning (Promptfoo, PromptLayer), evaluation (Braintrust, Humanloop, LangSmith), observability (Langfuse, Helicone, Arize), secrets management and rate limiting — plus the full RAG stack (vector databases, ingestion, retrieval).

    It is one of the hottest topics in 2026: shipping a POC is easy; operating an LLM at scale and securely is still the real challenge.

  • Machine Learning

    Concept

    Machine learning is the set of methods and techniques that allow a machine (a computer) to learn autonomously from data. From there, it becomes clear that artificial intelligence and machine learning are tightly linked.

    Machine learning is one of the building blocks of AI, and progress in AI is largely driven by progress in machine learning. By contrast, traditional programming consists of giving a machine precise instructions that follow predefined rules: a machine learning system learns autonomously, while a classic program depends entirely on human instructions.

    Often handled by Data Scientists, machine learning is rooted in large datasets, hence its strong link with Big Data. It relies on algorithms that discover recurring structures in a dataset, known as patterns.

    By uncovering these patterns, the algorithms use their own outputs to learn, evolve and improve (hence « learning »). Developing a machine learning algorithm typically involves four steps:
    - Prepare the training ground: provide the model with a clean dataset (photos, numbers, words, etc.) on which it will base its learning. The quality of the training data determines the quality of what the algorithm learns.
    - Pick the trainee algorithm: several families of machine learning algorithms suit different use cases, among them linear regression, clustering algorithms and neural networks (the foundation of Deep Learning).
    - Train it: a form of calibration. You run the chosen algorithm over the prepared dataset and compare its outputs to the expected results.
    - Tune it: if the gap is too wide, you adjust the model until it reaches the expected outcome. The model is then calibrated and ready to be used in its domain: spotting spam emails, for example.
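
    The four steps above can be sketched on toy data: a deliberately naive « nearest mean » classifier in pure Python, standing in for a real ML library, classifying a message from a single feature (say, its number of exclamation marks):

    ```python
    # 1. Prepare the training ground: labelled examples (feature, label).
    training = [(0, "ham"), (1, "ham"), (2, "ham"),
                (7, "spam"), (8, "spam"), (9, "spam")]

    # 2. Pick the trainee algorithm: nearest class mean, a crude
    #    cousin of clustering, chosen here for readability.
    def train(examples):
        sums, counts = {}, {}
        for value, label in examples:
            sums[label] = sums.get(label, 0) + value
            counts[label] = counts.get(label, 0) + 1
        return {label: sums[label] / counts[label] for label in sums}

    # 3. Train it: compute one mean per class.
    means = train(training)  # {'ham': 1.0, 'spam': 8.0}

    def predict(means, value):
        return min(means, key=lambda label: abs(means[label] - value))

    # 4. Tune / evaluate: compare outputs to expected results on new data.
    held_out = [(1, "ham"), (8, "spam")]
    accuracy = sum(predict(means, v) == label for v, label in held_out) / len(held_out)
    print(accuracy)  # 1.0 on this toy set
    ```

    Real pipelines use richer features and libraries such as scikit-learn, but the prepare / pick / train / tune loop is the same.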

  • Marketplace

    Term

    As the name suggests, a marketplace is a digital « market square ». It is also a business model.

    Picture a market square where sellers of food products gather, selling more or less the same goods but each working independently, paying a small contribution to the local town hall for the right to occupy a stall.

    Digitise that square as a website and replace the fruit & vegetable stalls with sellers of any kind of product. Each stall pays a cut of its revenue to the marketplace that hosts it — and there you have a digital marketplace.

    The French marketplace CDiscount is a good example. They build the platform and the complex technology around it, then each brand chooses to offer its products, paying a commission on each sale.

    First adopted by retail and consumer-to-consumer sales (eBay, Amazon, Rakuten…), the marketplace model quickly spread to any type of service (rides for Blablacar, accommodation for Airbnb, and so on).

  • MCP

    TechTerm

    MCP (Model Context Protocol) is an open protocol, published by Anthropic in November 2024, that standardises how an AI application (assistant, agent, IDE) connects to external context sources: databases, APIs, filesystems, SaaS tools, internal services.

    An MCP server exposes resources (reads), tools (actions) and prompts (templates) that an MCP client (Claude Desktop, Cursor, VS Code, Claude Code…) can consume in a uniform way. This avoids every AI application having to reinvent its integrations.

    MCP has become, in 2025, the de-facto standard for agent tooling, with a large catalogue of open-source servers maintained by vendors (GitHub, Linear, Notion, Slack, Stripe…).

  • MFA / SSO

    TermConcept

    MFA (Multi-Factor Authentication) is a security mechanism that requires at least two proofs of identity to authenticate a user, picked from: something they know (a password), something they have (a phone, a YubiKey) or something they are (biometrics).

    SSO (Single Sign-On) lets a user sign in once at an identity provider (Google Workspace, Microsoft Entra ID, Okta) and then access all federated applications without re-entering a password, using protocols such as SAML 2.0, OpenID Connect or OAuth 2.

    MFA and SSO combine: SSO simplifies the experience, MFA hardens the central authentication factor. Passkeys (FIDO2/WebAuthn) are replacing passwords + OTP with phishing-resistant, password-free authentication.
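
    The « something they have » factor is often a TOTP code (RFC 6238), which a phone and the server both derive from a shared secret and the current 30-second window. A self-contained sketch, checked against the RFC's published test vector:

    ```python
    import hashlib
    import hmac
    import struct

    def totp(secret: bytes, unix_time: int, digits: int = 6, step: int = 30) -> str:
        """One-time code per RFC 6238 (HMAC-SHA1 variant)."""
        counter = unix_time // step
        digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                                  # dynamic truncation
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    # RFC 6238 test vector: this secret at t=59 gives the 8-digit code 94287082.
    print(totp(b"12345678901234567890", 59, digits=8))  # 94287082
    ```

    In production you would use a vetted library rather than rolling your own, but the mechanism really is this small.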

  • Microservice

    MethodologyTerm

    The microservice is a software architecture approach (hence the name microservices architecture) often contrasted with monolithic architecture. Microservices architecture can be seen as an evolution of SOA (Service-Oriented Architecture).

    This style of architecture splits an application into multiple microservices, each independent of the others and specialised in a business-oriented task (search, payment, activity history, etc.). Each microservice is independent, meaning it has its own environment and its own code, often packaged inside containers managed via Docker.

    It communicates with the client or with other microservices through an API, the guarantee of its independence. A microservices architecture brings real agility to development by allowing very rapid evolution: it is easy to evolve a feature by modifying the corresponding microservice, rather than touching the entire application and increasing the risk of bugs or outages.

    If something breaks, only the modified microservice is affected — others keep running. In the same way, it's easier to measure the performance of an isolated microservice and therefore of a single feature.

    Another benefit of microservices hosted in containers is the ability to duplicate them — and their environment — quickly to handle a temporary spike (a surge of visitors during Black Friday on an e-commerce site, for example), then scale back down. This enables a strong ability to adapt to demand.

    Splitting microservices by business domain also makes it possible to form small, specialised teams around them. A microservices architecture therefore has many advantages for applications that need to evolve frequently and quickly (agility).

    However, it is described as a complex system of simple microservices: it requires costly infrastructure, complex setup and an organisational structure that enables fast, easy communication so teams stay in sync.

  • Mistral

    TechConcept

    Mistral AI is a French company founded in 2023 by former Meta and Google DeepMind researchers, which builds a family of LLMs — some of them distributed as open weights (freely downloadable and runnable locally).

    Mistral's models (Mistral, Mixtral, Codestral, Mistral Large, Magistral…) cover generalist use cases, code and reasoning. The company also runs its Le Chat product and a Europe-hosted API — a differentiator for organisations bound by sovereignty constraints.

    Mistral has become, in 2026, the leading European champion of generative AI, valued at several tens of billions of euros.

  • ML Engineer

    Role

    The Machine Learning Engineer (ML Engineer) is responsible for turning a machine-learning model (often prototyped by a Data Scientist) into a robust production system: packaging, deployment, monitoring, updates.

    They combine Data Scientist skills (statistics, modelling) with Software Engineer skills (solid Python, tests, CI/CD, containers, cloud). They typically know the major ML frameworks (PyTorch, TensorFlow, scikit-learn), MLOps tools (MLflow, Weights & Biases, Vertex AI, SageMaker) and the basics of data engineering.

    Not to be confused with an AI Engineer (more LLM application-focused) or an MLOps Engineer (more infrastructure-focused).

  • MLOps

    MethodologyTech

    MLOps (Machine Learning Operations) is the set of practices applying DevOps principles to the lifecycle of machine-learning models: versioning code, data and models; CI/CD for training and deployment; monitoring quality and drift; reproducibility of experiments.

    It responds to a sobering reality: most ML models built never make it to production, and those that do drift silently as the data evolves.

    Reference tools include MLflow, Kubeflow, Vertex AI, SageMaker, Weights & Biases, Metaflow and DVC for data versioning. MLOps precedes and complements LLMOps.

  • MLOps Engineer

    Role

    The MLOps Engineer is an engineer specialised in productionising and operating machine-learning models: they build and run the platform and pipelines that let Data Scientists and ML Engineers ship their models continuously.

    They are close to an SRE (resilience, monitoring, infra-as-code, Kubernetes) and to a data engineer (pipeline orchestration, managing large volumes, feature stores). They are responsible for the SLAs of ML services in production.

    A role in very high demand in 2026, as AI gets embedded in products everywhere.

  • Monorepo

    MethodologyTerm

    A monorepo is a single code repository that contains several logically distinct projects (applications, services, shared libraries), as opposed to a polyrepo setup where each project lives in its own repo.

    Upsides: atomic refactoring across the whole codebase, easy code and config sharing, global visibility, coordinated CI/CD. Downsides: the size of the repo, and the complexity of build tools that must understand internal dependencies to rebuild/retest only what changed.

    The JavaScript/TypeScript ecosystem has dedicated tools: Turborepo, Nx, pnpm workspaces, Yarn workspaces. Bazel and Pants cover polyglot monorepos at very large scale (Google, Meta, Stripe, Shopify…).

  • Multimodal

    TermConcept

    An AI model is called multimodal when it can understand and/or generate several types of data at once: text, image, audio, video — and sometimes other signals like code or structured data.

    Modern multimodal models (GPT-4o, Claude, Gemini) handle these modalities in a unified representation space. You can ask a question about a photo, transcribe and respond to an audio file, or generate text from a video.

    This paradigm gradually replaces the historical pipelines where each modality had its own specialised model (OCR then NLP, ASR then NLP…). In 2026, almost every frontier LLM is natively multimodal.

  • MVC

    Term

    MVC (Model-View-Controller) is one of the most widely used software architectures for web applications. It structures an application by splitting it neatly into three modules: the model, the view and the controller.

    Composition of the MVC architecture
    - Model: the core of the application, which handles data, retrieves information from the database and organises it so it can be processed by the controller.
    - View: the graphical component of the interface, which presents the model's data to the user.
    - Controller: the decision-making component that handles the logic; it is the intermediary between the model and the view.
    Source: rosedienglab.defarsci.org
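
    A minimal, framework-free sketch of the three roles in Python (real MVC frameworks wire this plumbing up for you):

    ```python
    # Minimal MVC sketch: each class has exactly one of the three roles.
    class Model:
        """Holds and retrieves the data (here a hard-coded in-memory store)."""
        def get_users(self):
            return [{"name": "Ada"}, {"name": "Linus"}]

    class View:
        """Presents the model's data to the user."""
        def render(self, users):
            return "\n".join(f"- {u['name']}" for u in users)

    class Controller:
        """Mediates: fetches from the model, hands the result to the view."""
        def __init__(self, model, view):
            self.model, self.view = model, view

        def list_users(self):
            return self.view.render(self.model.get_users())

    page = Controller(Model(), View()).list_users()
    print(page)  # - Ada
                 # - Linus
    ```

    The payoff of the split: you can swap the view (HTML instead of plain text) or the model (a real database) without touching the other two modules.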

  • MySQL

    Tech

    MySQL is a relational database management system (RDBMS). It is distributed under a dual GPL/proprietary licence. It is one of the most widely used database engines in the world, used by both the general public (web applications mostly) and professionals, competing with Oracle, PostgreSQL and Microsoft SQL Server.

    Its name comes from co-creator Michael Widenius' daughter, My. SQL refers to Structured Query Language, the query language used.

  • Next.js

    FrameworkTech

    Next.js is an open-source React framework published by Vercel, now the de-facto standard for building modern web applications with React.

    It provides server-side rendering (SSR), static generation (SSG), incremental rendering (ISR), filesystem-based routing, Server Components, Server Actions, automatic image and font optimisation, and native edge deployment.

    Since Next.js 13 (App Router) and through 14, 15 and 16, the framework has been restructured around React Server Components and a declarative cache model. It is used by Notion, OpenAI, Anthropic, TikTok, Hulu and most new production React projects.

  • NFT

    TermTech

    An NFT (Non-Fungible Token) is a unique, tamper-proof digital asset whose ownership is recorded on a blockchain. Unlike a regular cryptocurrency, each NFT is distinct and not interchangeable with another.

    NFTs are defined by standards such as ERC-721 and ERC-1155 on Ethereum. They were first popularised by digital art and collections (CryptoPunks, Bored Ape) before extending to more structural use cases: event tickets, ownership certificates, digital identity, video-game items.

    After a speculative bubble in 2021–2022, the market has consolidated around utility-driven use cases.

  • NIS2

    TermConcept

    The NIS2 directive (Network and Information Security 2) is the European text that strengthens the cybersecurity level of essential and important entities across 18 sectors (energy, transport, banking, health, digital infrastructure, cloud providers, MSPs…). It replaces the 2016 NIS directive and has been in force since October 2024.

    The entities concerned must put in place cyber governance at board level, minimum technical measures (MFA, vulnerability management, software supply-chain security), incident notification within 24/72 hours to the ANSSI (in France) and register with the relevant authority.

    Sanctions can reach 10 million euros or 2% of global turnover.

  • NodeJS

    FrameworkTech

    NodeJS is a server-side environment based on JavaScript. It was created in 2009 by Ryan Dahl, who wanted to improve the file upload progress bar on Flickr.

    In a few years, Node became a reference for JavaScript developers and the community kept growing. Node is open source and gets several releases each year.

    Strictly speaking, NodeJS is not a server in itself: it is a runtime for executing JavaScript projects/applications on the server side rather than on the client side (the browser). The principle is the same as with PHP or a Ruby site: the code runs server-side, and clients then reach the JavaScript application over plain HTTP.
    Source: zdnet

  • Observability

    MethodologyConcept

    Observability is the ability to understand the internal state of a distributed system solely from its external outputs: logs, metrics, traces — and, more recently, profiles and events.

    It goes beyond simple monitoring (which answers « is the system working? ») to address « why is the system behaving this way, particularly in this case I hadn't anticipated? ».

    Reference platforms include Datadog, Grafana (Loki + Prometheus + Tempo), Honeycomb, New Relic, Splunk — and the open standard OpenTelemetry, which emerged in 2025 as the universal instrumentation layer.

  • OKR

    Methodology

    OKRs (Objectives and Key Results) are a goal-setting method popularised by Andy Grove at Intel and then spread by Google. For a given period (often a quarter), you define an ambitious qualitative objective and 3 to 5 measurable key results that prove you've achieved it.

    A well-formed OKR is inspiring (the objective), precise and quantifiable (the KRs), vertically aligned (team OKRs feed into company OKRs) and public (internal transparency).

    They are particularly common in tech organisations and startups. Beware of the OKR theatre syndrome: if KRs are systematically hit at 100%, they are probably not ambitious enough.

  • Open Source

    Term

    Today used generically to describe anything (tech, service, object…) that is openly available, the term « open source » originates from open-source software.

    An open-source piece of software has its source code freely available online: anyone can take it, use it or modify it to fit their needs. Reputable, widely used open-source software is typically maintained by a community that develops and evolves it.

    Everyone contributes a brick to the wall, with peer review along the way. Some private companies release part of their code as open source and base their business model on selling modules/extensions or on a closed-source core. This lets users pay for a technology and then tailor it to their needs using the open code.

    A few examples of well-known open-source technologies:
    - The Firefox and Tor browsers
    - The Keepass password manager
    - The Gimp image-editing software
    - The Linux operating system
    - The Ansible and Kubernetes automation tools

  • OpenTelemetry

    TechTool

    OpenTelemetry (often shortened to OTel) is an open-source instrumentation standard for applications, hosted by the CNCF, that defines a uniform API and SDK to collect logs, metrics and traces, independently of the observability vendor used downstream.

    The idea is to decouple instrumentation (which you don't have to redo if you switch tools) from the analysis backend (Datadog, Grafana, Honeycomb, New Relic…). A central collector receives signals from every application and routes them to one or several destinations.

    OpenTelemetry is, in 2026, the uncontested standard for cloud-native instrumentation, supported natively by every major framework and APM vendor.

  • Operating System / OS

    Term

    An operating system (OS) is the software that drives the hardware of an electronic device and receives instructions from the user or from other software.

    In a computer, the OS manages the processor(s) and memory. It runs peripherals (keyboard, mouse, screen, hard drive, card reader…) and exposes a user interface (windows, file manager, etc.).

    In a camera or a connected object, it plays the same role of interface between the hardware and the applications.

    Well-known examples of OSes

    For computers:
    - Linux
    - Windows (Microsoft)
    - macOS (Apple)

    For smartphones:
    - iOS (Apple)
    - Android (Google)

    The OS is the interface between applications and hardware. If you compare a computer to the human body, hardware is the body and the OS is the brain: without a brain, the body is inert; without an OS, a computer is just plastic and metal.

    Source: futura-sciences.com

  • ORM

    Term

    An ORM (Object-Relational Mapping) is a layer that sits between an application and a relational database, mapping tables and rows to the classes and objects of the application's language. It lets developers manipulate data as objects rather than writing SQL queries by hand.
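
    As an illustration, here is a toy Python sketch of the idea — not a real ORM such as SQLAlchemy or Hibernate, which generate this plumbing automatically. A class stands for a table, instances stand for rows:

```python
import sqlite3
from dataclasses import dataclass

# Toy illustration of the ORM idea: the application manipulates User
# objects; the mapper translates them to and from SQL behind the scenes.

@dataclass
class User:
    id: int
    name: str

class UserMapper:
    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        conn.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")

    def save(self, user: User) -> None:
        self.conn.execute("INSERT INTO users VALUES (?, ?)", (user.id, user.name))

    def find(self, user_id: int) -> User:
        row = self.conn.execute(
            "SELECT id, name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return User(*row)

conn = sqlite3.connect(":memory:")
mapper = UserMapper(conn)
mapper.save(User(1, "Ada"))
print(mapper.find(1))  # User(id=1, name='Ada')
```

    The application code above never builds SQL strings itself — that translation is exactly what the ORM layer encapsulates.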

  • PaaS

    Term

    Platform-as-a-Service (PaaS) is a cloud computing offering in which a service provider delivers a platform to its customers, letting them develop, run and manage business applications without building and maintaining the infrastructure typically required by software development. PaaS and serverless services usually bill only for the compute, storage and network resources consumed.
    Source: oracle.com

  • Pentest

    MethodologyTerm

    A pentest (or penetration test) is an offensive security audit during which an expert (the pentester), with written authorisation, simulates attacks against a target (web application, API, infrastructure, mobile app, cloud environment) to identify exploitable vulnerabilities before a real attacker does.

    Pentests come in three flavours: black box (no initial access), grey box (limited access, for example a user account) and white box (full access to code and documentation). The deliverable is a prioritised report with proof of exploitation and remediation recommendations.

    Pentests are complementary to static analysis (SAST), dynamic analysis (DAST), bug bounties and configuration audits.

  • PHP

    LanguageTech

    _PHP: Hypertext Preprocessor_, better known as PHP, is an open-source programming language used primarily to produce dynamic web pages through an HTTP server. It can also run like any locally interpreted language.

    PHP is an imperative, object-oriented language. PHP has powered a great number of famous websites, including Facebook and Wikipedia. It is considered one of the foundations of so-called dynamic websites — and of web applications more broadly.

  • PING

    Term

    Ping is the name of a computer command used to test whether another machine can be reached over an IP network. The command also measures how long the response takes — the round-trip time.

  • Platform Engineer

    Role

    The Platform Engineer is an engineer dedicated to building and evolving an Internal Developer Platform that acts as a self-service environment for product teams to ship their applications: deployment, observability, security, secrets management, environments, CI/CD.

    Platform engineering is an evolution of DevOps: rather than waiting for each team to build its own tooling, you invest in a central team that treats the platform as a product, with users (the developers), a roadmap and success metrics (DORA, lead time, deployment frequency…).

    A fast-growing role in 2026, with a dedicated ecosystem (Backstage, Port, Humanitec, Crossplane).

  • Product

    Role

    In tech (and in economics), a product is a good or a service tied to a production activity, meant to satisfy a need — usually in exchange for a price paid by the user.

  • Product Manager

    Role

    The Product Manager is responsible for a product's vision, strategy and roadmap: they identify user needs, prioritise what gets built and align business, design and engineering around the product's goals.

  • Product Owner

    Role

    The Product Owner is a role defined by SCRUM: within the team, they represent users and stakeholders, maintain and prioritise the product backlog, and validate the increments delivered at each sprint.

  • Prompt Engineer

    Role

    The Prompt Engineer is a profile that emerged with the rise of LLMs, whose job is to design, test and optimise the prompts — the instructions given to generative AI models — to obtain reliable, safe and reproducible results in a given product context.

    The role has evolved: in 2026, it is rarely a full-time position in a product team, but rather a skill embedded in broader roles (AI Engineer, AI Product Manager, AI Designer). It combines an understanding of LLMs, product sensibility, experimentation rigour (prompt A/B testing, evals) and writing ability.

    Dedicated platforms (Promptfoo, PromptLayer, Braintrust) support this work.

  • Prompt Engineering

    MethodologyTerm

    Prompt engineering is the set of techniques used to phrase the instructions given to a generative AI model in order to steer its outputs towards the desired quality, reliability and behaviour.

    It covers patterns such as few-shot prompting (providing examples), chain-of-thought (forcing the model to reason step by step), structured output (forcing a JSON response), tool availability, the use of system versus user prompts, and explicit handling of failure cases (« if you don't know, say so »).
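
    The few-shot and structured-output patterns above can be sketched as follows. The chat-message schema is an assumption mirroring common LLM chat APIs; no real model is called:

```python
import json

# Illustrative few-shot prompt with a structured (JSON) output and an
# explicit failure case. The "reply" below is a stand-in, not a real
# model response.

system = (
    "You extract the sentiment of a review as JSON, "
    'for example {"sentiment": "positive"}. '
    'If you cannot tell, answer {"sentiment": "unknown"}.'  # failure case
)

few_shot = [  # examples that steer both the answer and its format
    {"role": "user", "content": "Great product, works perfectly."},
    {"role": "assistant", "content": '{"sentiment": "positive"}'},
    {"role": "user", "content": "Broke after two days."},
    {"role": "assistant", "content": '{"sentiment": "negative"}'},
]

messages = [
    {"role": "system", "content": system},
    *few_shot,
    {"role": "user", "content": "Arrived late but works fine."},
]

# A structured-output prompt is only useful if the reply parses:
reply = '{"sentiment": "positive"}'  # imagined model reply
assert json.loads(reply)["sentiment"] in {"positive", "negative", "unknown"}
```

    Validating the model's output against the expected schema, as in the last lines, is part of the discipline: a prompt is only as good as the checks around it.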

    It is today a cross-cutting skill rather than a job, but a differentiating one: the quality of an AI application depends as much on the prompt as on the underlying model.

  • Pure Player

    Term

    A pure player is a company whose business runs on and thanks to the internet: no physical shops, no in-person front office with customers. Commercial activity happens through a website or an app backed by a massive infrastructure.

    The term applies to the big tech companies across every sector. A few examples:
    - Amazon, the e-commerce giant. Amazon does run many warehouses to handle its stock and logistics, but it has no physical place where you can buy the products it sells.
    - Netflix or Spotify, for video and audio streaming.
    - Mediapart, a digital-only newspaper, is a pure player.
    The term « pure player » comes mostly from sectors transformed by digital innovation — retail being the textbook example.

  • PWA

    TermTech

    A PWA (Progressive Web App) is a web application that leverages modern Web standards to offer an experience close to a native app: installation on the home screen, offline operation through a service worker, push notifications, access to certain sensors.

    PWAs are distributed via the Web (a URL) rather than through app stores, which simplifies deployment and removes installation and review friction. They run on any device with a compatible browser.

    The model is mature in 2026, with solid support on Android and steady catch-up on iOS. Twitter, Spotify, Pinterest, Starbucks and Uber have all rolled out PWAs alongside their native apps.

  • Python

    LanguageTech

    Python is a programming language invented by Guido van Rossum. The first version of Python was released in 1991. Python is interpreted, meaning you don't need to compile it before running it.

    Python is both simple and powerful: you can write very small scripts, and thanks to its many libraries you can tackle much more ambitious projects.
    - Web: Python combined with the Django framework is a great technical choice for large website projects today.
    - System: Python is also often used by sysadmins for repetitive or maintenance tasks. By the way, if you want to build Java applications by coding in Python, you can — thanks to the Jython project.
    Source: python.doctor
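
    As an illustration of the « very small scripts » side, here is a complete program — counting word frequencies — that runs directly, with no compilation step:

```python
# A small stand-alone script of the kind Python is often used for:
# count word frequencies in a text using only the standard library.
from collections import Counter

text = "the quick brown fox jumps over the lazy dog the end"
counts = Counter(text.split())
print(counts.most_common(1))  # [('the', 3)]
```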

  • Quick & Dirty

    Term

    A quick-and-dirty solution is a rough, sometimes inelegant or inadequate solution used to fix or paper over a problem quickly. It is usually faster to implement than a clean solution, at the cost of technical debt.

    Quick-and-dirty solutions are usually about working around a problem rather than solving its root cause. The approach is also used for prototyping or proof-of-concept validation (POC).

    Anecdote: Microsoft's very first operating system, MS-DOS, was bought from Seattle Computer Products in 1981. Its original name was Q-DOS — for Quick and Dirty Operating System.

  • RAG

    TechMethodology

    RAG (Retrieval-Augmented Generation) is an architectural pattern for LLM applications in which, before answering a question, the system fetches relevant passages from a corpus of documents in order to ground the generation in trustworthy sources.

    A typical RAG pipeline has three stages: (1) ingestion — splitting documents into chunks and computing embeddings stored in a vector database; (2) retrieval — for each question, retrieving the chunks semantically closest to it; (3) generation — the LLM answers the question with the chunks provided in its context.
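
    A toy sketch of these three stages, where a crude word-overlap score stands in for real embeddings and a vector database, and an f-string stands in for the LLM call:

```python
# Toy RAG pipeline: ingestion, retrieval, generation.

documents = {
    "billing.md": "Invoices are issued on the 1st of each month.",
    "support.md": "Support is available by email 24/7.",
}

# (1) Ingestion: split documents into chunks (here, one chunk per doc).
chunks = [(name, text) for name, text in documents.items()]

def score(question: str, chunk: str) -> int:
    """Crude relevance score: number of shared lowercase words
    (a real pipeline would compare embedding vectors instead)."""
    return len(set(question.lower().split()) & set(chunk.lower().split()))

# (2) Retrieval: pick the chunk semantically closest to the question.
question = "When are invoices issued?"
best_name, best_text = max(chunks, key=lambda c: score(question, c[1]))

# (3) Generation: hand the LLM the question plus the retrieved context.
prompt = f"Answer using only this context:\n{best_text}\n\nQuestion: {question}"
print(best_name)  # billing.md
```

    The grounding happens in step (3): because the model is told to answer only from the retrieved passage, its answer can be traced back to a source.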

    RAG is the standard answer to the problem of hallucinations and to the dated knowledge of LLMs. It is simpler to keep up to date than fine-tuning.

  • ReactJS

    LanguageTech

    React (also known as React.js or ReactJS) is an open-source JavaScript library developed by Facebook since 2013. Its main purpose is to make it easier to build single-page web applications, through components that depend on state and re-render a portion of the page whenever that state changes.

  • Responsive

    ConceptTerm

    The term « responsive » describes a web page (or app) that adapts to the user's device.

    Smartphones, tablets, computer screens or screens on connected objects each come in different sizes depending on brand and model. Coding a responsive page means coding a page that adapts to the screen or format while keeping a high-quality user experience: aligned text, sharp photos, suitable menus, intuitive flows, and so on.

    Try visiting the same web page from a smartphone, a tablet and a desktop with a large screen, and compare how the same page is organised. We speak of « Responsive Web Design » — a challenge that front-end developers address through the CSS language.

  • RGPD / GDPR

    TermConcept

    GDPR (General Data Protection Regulation, known as RGPD in French) is the European regulation, in force since May 2018, that governs the processing of personal data of European Union residents — regardless of the country where the processing organisation is based.

    It grants a set of rights to individuals (access, rectification, erasure, portability, objection) and sets obligations on organisations: a legal basis for each processing activity, privacy by design, a processing register, a Data Protection Impact Assessment (DPIA) for risky processing, breach notification within 72 hours, appointment of a DPO in some cases.

    Fines can reach 20 million euros or 4% of global turnover. The CNIL is the supervisory authority in France.

  • RLHF

    TechConcept

    RLHF (Reinforcement Learning from Human Feedback) is a technique for aligning LLMs: after pre-training, the model is fine-tuned using comparisons made by humans between multiple possible responses, so that it adopts the desired behaviours (helpful, honest, harmless).

    The process typically has three steps: supervised fine-tuning on human demonstrations, training a reward model that learns human preferences, and then optimising the LLM against that reward model with an algorithm such as PPO. DPO, a popular alternative, skips the explicit reward model and optimises directly on the preference pairs.

    RLHF is what made ChatGPT usable and took LLMs from the lab to the mainstream. Anthropic proposed a variant with Constitutional AI (RLAIF), in which part of the feedback is produced by other models following explicit principles.

  • Ruby

    LanguageTech

    Ruby is an open-source programming language created in 1995 by Yukihiro « Matz » Matsumoto. It is interpreted, object-oriented and multi-paradigm. The language was standardised in Japan in 2011 and by the International Organisation for Standardisation in 2012. Its main framework is Rails (hence Ruby on Rails). Ruby is known for being easy to learn.

  • SaaS

    Term

    An acronym for « Software As A Service ». SaaS is, above all, a business model for software: instead of being installed on the end user's machine, the software is made available online, hosted in the cloud.

    Beyond the lower costs for the company that owns the product, online hosting enables quick deployment and real-time evolution and measurement.

    Well-known SaaS examples:
    - Slack
    - Salesforce
    - Stripe
    - Trello
    - Zendesk

  • SAFe

    Methodology

    SAFe (Scaled Agile Framework) is a proprietary methodological framework aimed at applying agile principles at the scale of large organisations, where dozens to hundreds of teams work on the same product portfolio.

    It defines multiple levels (team, programme, portfolio, large solution), a central event — the quarterly PI Planning that synchronises dozens of teams — and a set of roles (Release Train Engineer, Product Manager, System Architect…).

    It is the most widely deployed scaling framework in large groups (banks, telcos, industry), but also the most criticised by the agile community for its weight and its distance from the Agile Manifesto's spirit. Alternatives include LeSS, the Spotify Model and the simple « scrum of scrums ».

  • SAST / DAST

    MethodologyTool

    SAST (Static Application Security Testing) and DAST (Dynamic Application Security Testing) are the two main families of application-security analysis tools.

    SAST analyses source code or binaries without executing them, to detect structural vulnerabilities (SQL injection, XSS, hardcoded secrets, bad cryptographic practice). It fits naturally into CI. Tools: SonarQube, Snyk Code, Semgrep, Checkmarx, GitHub CodeQL.

    DAST tests the running application from the outside, the way an attacker would: it sends malicious requests and observes responses. It catches runtime vulnerabilities invisible in the code (misconfigurations, authentication issues). Tools: OWASP ZAP, Burp Suite, Tenable, Acunetix.

    The two are complementary and combine with software composition analysis (SCA), container scanning and IaC scanning.

  • SBOM

    TermConcept

    An SBOM (Software Bill of Materials) is a formal, machine-readable inventory of every component — libraries, direct and transitive dependencies, versions, licences, hashes — that makes up a piece of software.

    It has become central to software supply-chain security: without an SBOM, it is impossible to know quickly whether a newly disclosed vulnerability (such as Log4Shell) affects your applications. With an SBOM, you can automatically query the list of affected components.

    The standard formats are SPDX (Linux Foundation) and CycloneDX (OWASP). Several regulations make SBOMs mandatory for some markets (Executive Order 14028 in the United States, the Cyber Resilience Act in Europe for products with digital components).

  • SCRUM

    Methodology

    SCRUM is an Agile method designed in 1995 by Jeff Sutherland and Ken Schwaber, themselves inspired by Takeuchi and Nonaka's article « The New New Product Development Game ».

    Sometimes considered an approach or a framework rather than a method, SCRUM lets teams run complex, evolving projects while delivering products with high added value.

    SCRUM is built on three pillars:
    - Transparency between every team member on all aspects of the project, especially the obstacles they hit… which enables…
    - Inspection of those obstacles, which can be described as analysis and study of the issue… which leads to…
    - Adaptation to obstacles and to change.

    SCRUM splits the project team into three groups:
    - The Product Owner, who represents user needs and the product vision.
    - The development team — at a minimum developers and UX/UI Designers, plus any other tech profiles the project needs.
    - The SCRUM Master, who ensures SCRUM is applied correctly and consistently. They run « ceremonies » and train team members in SCRUM.

  • SCRUM Master

    Role

    The SCRUM Master ensures SCRUM is understood and applied correctly and consistently: they run the « ceremonies », remove obstacles for the team and train team members in SCRUM.

  • Server Components

    TermTech

    React Server Components (RSCs) are a new kind of React component, stable since 2024, that run exclusively on the server and are never hydrated on the client. They can access server resources directly (database, filesystem, secrets) and send the browser a serialised description of their render tree — finer-grained than HTML.

    The main benefit is reducing the JavaScript shipped to the client: anything that doesn't need to be interactive stays on the server. Client components (`"use client"`) are opt-in.

    Next.js (App Router) is the reference implementation. The pattern is gradually replacing the traditional full-page SSR + hydration approach on the React side.

  • Serverless

    TechTerm

    Serverless is a cloud execution model in which the developer deploys code without having to provision or manage servers: the platform allocates capacity on demand, scales it automatically and bills by actual consumption (often per millisecond and per invocation).

    Serverless covers several categories: on-demand functions (AWS Lambda, Cloudflare Workers, Vercel Functions, Google Cloud Run), serverless databases (Neon, Aurora Serverless, DynamoDB, Turso), edge rendering and a number of managed services (queues, storage).
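
    As an illustration, a minimal handler in the AWS Lambda Python style — you write only the function, the platform handles provisioning, scaling and billing. The event shape below is invented for the example:

```python
import json

# Sketch of a serverless function: the platform invokes handler() on
# each request. The queryStringParameters shape is an illustrative
# assumption, not a guaranteed event format.

def handler(event, context=None):
    name = event.get("queryStringParameters", {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Locally, you can call the handler directly, as the platform would:
response = handler({"queryStringParameters": {"name": "dev"}})
print(response["statusCode"])  # 200
```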

    It shines on sporadic or unpredictable workloads; on very steady, predictable workloads, the cost and constraints (cold starts, limited runtime) can make a traditional instance more appropriate.

  • Service Mesh

    TechTerm

    A service mesh is an infrastructure layer dedicated to inter-microservice communication, transparently handling routing, observability, security (mTLS), resilience (retries, timeouts, circuit breakers) and traffic management (canary, A/B) — without changing the application code.

    It typically relies on injecting a sidecar (a proxy like Envoy) next to each service, driven by a central control plane. The service mesh offloads distributed network complexity from the application.

    The major implementations are Istio, Linkerd and Consul. In 2026, the rise of sidecar-less designs (Istio Ambient, Cilium Service Mesh based on eBPF) further simplifies operations.

  • Shape Up

    Methodology

    Shape Up is a product-management method published by Basecamp (Ryan Singer) in 2019, positioned as an alternative to Scrum for product teams looking for autonomy and pragmatism.

    It is organised around 6-week cycles (followed by 2 weeks of cool-down to handle debt and explore), pitches (shaped problems — bounded, rough but clear — picked by leadership) and small autonomous teams that commit to a fixed appetite (how much time you are willing to invest) rather than a fixed scope.

    It is widely adopted in product startups where quarterly cadence, autonomy and the absence of granular estimates match the culture.

  • Singleton

    Term

    A Singleton is a creational design pattern that guarantees a class only ever has a single instance, while providing a global access point to it.
    Source: refactoring.guru
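
    A minimal Python sketch of the pattern (the Config class name and its settings attribute are arbitrary choices for the example):

```python
class Config:
    """Singleton: the class holds its one instance and always returns it."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}  # initialised only once
        return cls._instance

a = Config()
b = Config()
a.settings["debug"] = True
print(a is b, b.settings["debug"])  # True True
```

    Every call to Config() yields the same object, so state set through one reference is visible through all the others — the global access point the pattern promises.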

  • Smart Contract

    TechTerm

    A smart contract is a computer program deployed on a blockchain that executes deterministically and automatically when its conditions are met, without a trusted third party.

    Smart contracts are the foundational brick of DeFi, NFTs, DAOs and, more broadly, every decentralised application (dApp). They are written in dedicated languages — Solidity for Ethereum and its Layer 2s, Rust for Solana and NEAR, Move for Aptos and Sui.

    Once deployed, a smart contract is generally immutable: a bug can be catastrophic. That is why specialised security audits and bug bounties are a mandatory step before mainnet deployment.

  • Snowflake / BigQuery

    Tech

    Snowflake and BigQuery are the two main cloud data warehouses used in 2026 for large-scale analytics: they store and query terabytes to petabytes of data via SQL, with native separation of storage and compute, near-instant elasticity and pay-as-you-go billing.

    Snowflake (launched in 2014) is multi-cloud (AWS, Azure, GCP), with a virtual-warehouse architecture that isolates workloads. BigQuery (Google, launched in 2010) is GCP-native, fully serverless and billed per scanned byte or per slot.

    They are the reference targets of the modern data stack (ingestion via Fivetran/Airbyte, transformation via dbt, BI via Looker/Lightdash/Metabase) and now compete with lakehouse alternatives (Databricks, Trino + Iceberg).

  • SOA Architecture

    TechTerm

    Service-Oriented Architecture (SOA) is a design model that makes software components reusable through service interfaces that share a common language to communicate over a network.

    A service is a self-contained unit of software functionality (or a set of features) designed to perform a specific task such as fetching information or executing an operation. It bundles the code and data integrations needed to deliver a complete, distinct business function.

    A service can be accessed remotely, and updated or interacted with independently. In other words, SOA lets software components that are deployed and managed separately communicate and work together to form software applications shared across different systems.
    Source: redhat

  • SOC 2

    TermConcept

    SOC 2 (Service Organisation Control 2) is a US audit standard issued by the AICPA that has become an international benchmark for proving the maturity of a SaaS provider with respect to customer data management.

    It is structured around five Trust Service Criteria: security (mandatory), availability, processing integrity, confidentiality and privacy. The report is produced by an independent audit firm and exists in two flavours: Type I (point in time) and Type II (over a 6–12 month period).

    Many B2B companies require a SOC 2 Type II before signing a contract with a SaaS provider that hosts their data. Platforms such as Vanta, Drata or Secureframe automate much of the compliance process.

  • Software Craftsmanship

    Term

    Software craftsmanship is an approach to software development that emphasises code quality and the technical skill of developers. It positions itself as a response to recurring industry ills and to outsourcing trends that prioritise financial concerns over developer responsibility.

    The movement champions the craft side of development: per the software craftsmanship manifesto, it isn't enough for software to be functional — it must also be well designed.

    The core idea is to guarantee the reliability and maintainability of applications — hence the importance of professionals who can design software in line with quality standards.

    Software craftsmanship and agility are complementary: where agility focuses on flexible delivery cycles, software craftsmanship focuses on how the code itself is designed and written.

  • Solid

    FrameworkTech

    Solid (often SolidJS) is an open-source reactive JavaScript library created by Ryan Carniato. It offers a syntax very close to React (JSX, components) but is built on a fine-grained reactivity system, with no Virtual DOM.

    In Solid, components do not re-run on every state change: only the parts of the DOM that depend on a modified signal are updated. This delivers performance among the best on the market and a very small bundle.

    SolidStart is the equivalent of Next.js in the Solid ecosystem. The project has a smaller community than React but has influenced the recent evolution of several other frameworks (Svelte 5 runes, Vue Vapor).

  • Spotify Model

    Methodology

    The Spotify Model is a product organisation described by Spotify in 2012 in two articles by Henrik Kniberg. It structures teams into squads (autonomous cross-functional teams aligned on a mission), tribes (groupings of squads on the same domain), chapters (cross-cutting skills, e.g. all iOS engineers) and guilds (informal interest communities).

    The goal is to combine the autonomy of a startup with the shared resources of a large organisation. The model was massively copied in the 2010s — sometimes badly — and Spotify itself has since evolved and publicly acknowledged its limits.

    Don't adopt it to the letter — use it as inspiration to structure an autonomous product organisation.

  • SQLite

    Tech

    _SQLite_ is a library written in C that offers a relational database engine accessible through the SQL language. The engine is embedded directly in the application and the whole database lives in a single file: there is no separate server process.
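
    As an illustration, Python ships SQLite in its standard library (the sqlite3 module), so a full relational database fits in a few lines with nothing to install or run:

```python
import sqlite3

# The whole database lives inside the process (here, in memory);
# on disk it would be a single file passed to connect().
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE langs (name TEXT, year INTEGER)")
conn.executemany("INSERT INTO langs VALUES (?, ?)",
                 [("Python", 1991), ("Ruby", 1995)])
for name, year in conn.execute("SELECT name, year FROM langs ORDER BY year"):
    print(name, year)
# Python 1991
# Ruby 1995
```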

  • SRE

    Role

    The SRE (Site Reliability Engineer) is an engineer dedicated to the reliability of production systems: availability, performance, latency, capacity, incident management and operations automation.

    The practice was formalised at Google and popularised by the book Site Reliability Engineering (2016). It introduced concepts now considered standard: SLIs (indicators), SLOs (objectives), the error budget (the amount of unreliability allowed, which arbitrates between new features and stabilisation), toil (the repetitive work to be automated), and blameless post-mortems.

    The SRE is close to DevOps but with a stronger emphasis on software engineering applied to operations (50% of time spent in code) and on quantified, measured reliability objectives.

  • Staff / Principal Engineer

    Role

    The Staff Engineer and Principal Engineer roles are senior technical-expertise levels — beyond Senior and Lead — that let an engineer keep growing without going into management.

    A Staff Engineer has cross-team or cross-product influence, drives structural architecture choices, mentors other engineers and bridges the gap between tech and business. The Principal Engineer extends that influence to an entire organisation, or even to an industry.

    The individual contributor tracks at Google, Meta, Stripe, Shopify and GitLab have popularised these titles in France, where they remain a minority but are growing fast — particularly in tech scale-ups.

  • Sublime Text

    Tool

    Sublime Text is a general-purpose text editor written in C++ and Python, available on Windows, Mac and Linux. The software was first conceived as a feature-rich extension to Vim.

    Since version 2.0, released on 26 June 2012, the editor supports 44 major programming languages, with plug-ins frequently available for less common languages.

  • Supply Chain Attack

    TermConcept

    A supply chain attack is an attack that does not target a victim directly but rather one of its upstream suppliers — software vendor, open-source library, MSP — to indirectly reach a large number of targets via a compromised update or component.

    Landmark incidents include SolarWinds (2020), Codecov (2021), Log4Shell (2021), 3CX (2023) and the xz-utils affair (2024), where a malicious maintainer injected a backdoor into a foundational Linux library.

    Countermeasures include SBOMs, artifact signing (Sigstore, in-toto, SLSA), build isolation, vendored dependencies and reducing the surface area of third-party dependencies.

  • Svelte

    FrameworkTech

    Svelte is an open-source web framework created by Rich Harris in 2016, with a radical approach: it is a compiler that turns components into optimised JavaScript at build time, rather than shipping a framework runtime in the browser.

    The result: very small bundles, excellent performance and a very readable syntax (HTML augmented with `{#if}` / `{#each}` blocks). Svelte 5 (2024) introduced runes, a new fine-grained reactivity model inspired by signals.

    SvelteKit is the equivalent of Next.js in the Svelte ecosystem. The framework is used by The New York Times, Apple, Spotify and consistently ranks among the most loved frameworks by developers (State of JS).

  • Symfony

    TechFramework

    Symfony is a set of PHP components and a free MVC framework written in PHP. It provides modular, adaptable features that make web development easier and faster. The French web agency SensioLabs originally built the framework as Sensio Framework.

    Tired of rebuilding the same user management, ORM, etc. over and over, they developed the framework for their own needs. Since these problems were the same for other developers, the code was eventually shared with the developer community.

    The project then became Symfony (in line with the creator's wish to keep the S and F initials of Sensio Framework), and then Symfony2 from version 2 onwards. Symfony 2 broke compatibility with the 1.x branch. From version 2 on, compatibility breaks between versions are documented to ease upgrades.
    On 5 September 2017, Symfony passed the one-billion-download mark.

  • Synchronous / Asynchronous

    TermConcept

    Whether we are talking about communication, training or tools, synchronous describes something that happens in real time and expects an immediate response. Conversely, asynchronous describes intermittent exchanges that do not require an immediate response.

    A few examples make the definition simpler:
    - Asynchronous communication: emails and text messages. You send a question to someone, who can answer immediately or much later. We don't necessarily expect an instant response: we don't need to wait for one before sending another email or text.
    - Synchronous communication: video calls, phone exchanges, in-person or remote meetings. It is a direct, real-time exchange between participants; you can't realistically hold several of these conversations at once (although some manage it in remote meetings).

    Special case: instant messaging tools (Slack, WhatsApp, Messenger…) whose name and interface might suggest synchronous communication are in fact asynchronous.
    A WhatsApp message is really just like an email or a text, dressed up in an interface that feels real-time.

    More examples:
    - Synchronous training: « classic » classroom training, on site or via a video call, is synchronous. Everyone gets the same information at the same time, and participants can interact with one another in real time.
    - Asynchronous training: online training where content is downloadable or available on demand. The teacher/trainer creates the material in advance and makes it available to learners, who go through it at their own pace. There are no live exchanges between participants — except, perhaps, on forums… which is, of course, asynchronous communication :)
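    The same distinction exists in code: a synchronous call blocks until it gets its answer, while asynchronous calls can be in flight at the same time. A minimal TypeScript sketch (the `ask` function and its delay are invented for illustration):

```typescript
// Simulate asking someone a question that takes a moment to answer.
function ask(person: string): Promise<string> {
  return new Promise((resolve) =>
    setTimeout(() => resolve(`${person}: here is my answer`), 10),
  );
}

async function main() {
  // "Phone call" style: wait for each answer before moving on.
  const a = await ask("Alice");
  const b = await ask("Bob");

  // "Email" style: send both questions, collect the answers whenever they land.
  const [c, d] = await Promise.all([ask("Carol"), ask("Dan")]);

  console.log(a, b, c, d);
}

main();
```

    The first two questions take roughly twice as long as the last two, because the "email" pair overlaps in time.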

  • Tailwind CSS

    FrameworkTech

    Tailwind CSS is an open-source utility-first CSS framework that provides a wide vocabulary of low-level utility classes (`flex`, `pt-4`, `text-center`, `bg-blue-500`…) you compose directly in HTML, rather than writing separate stylesheets.

    The approach removes the cost of naming and the risk of dead styles; a compiler only emits the classes you actually use, producing very small final CSS. The design system is fully configurable through a `tailwind.config` file.

    Tailwind v4, released in 2025, ships its own Rust engine, is configurable directly in CSS and has become the dominant CSS framework for new web projects.

  • TDD

    Methodology

    Test-Driven Development. TDD is an Agile development method that grew out of Test-First Design: writing tests before the code.

    It evolved into TDD, which is built around three laws aimed at writing code solely to make a test pass. TDD helps you focus on the actual need (« What does my application need to handle? In which cases? ») so that you write only what is necessary and avoid superfluous code.

    It's about answering exactly the need, keeping the code as simple as possible — and therefore easily evolvable and maintainable.
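    The rhythm can be sketched in a few lines (the `slugify` function is an invented example): the test expresses the need first, then just enough code is written to make it pass.

```typescript
// Step 1 (red): the test below is written before any implementation exists.
// Step 2 (green): write just enough code to make it pass.
// Step 3 (refactor): clean up while keeping the test green.
function slugify(title: string): string {
  return title.trim().toLowerCase().replace(/\s+/g, "-");
}

// The "tests" — in a real project these live in a test runner (Jest, Vitest…).
console.assert(slugify("Hello World") === "hello-world");
console.assert(slugify("  TDD  in practice ") === "tdd-in-practice");
```

    Nothing in `slugify` exists that a test did not ask for — that is the point.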

  • Terraform

    TechTool

    Terraform is an open-source infrastructure-as-code (IaC) tool published by HashiCorp since 2014. It lets you describe your cloud infrastructure (AWS, Azure, GCP, OVH, Cloudflare, Datadog…) in a declarative language (HCL) and provision it idempotently.

    A classic Terraform workflow: `plan` computes the diff between the desired and actual state, `apply` applies the changes, and the state file records what has been provisioned. The multi-provider model made Terraform the de-facto standard for multi-cloud.
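    The plan step can be pictured as a diff between desired configuration and recorded state. A toy TypeScript sketch of that idea (not Terraform's actual algorithm; resource names are invented):

```typescript
type State = Record<string, string>;

// Compare desired configuration with recorded state and list the actions
// an apply would perform — a toy version of what `plan` shows.
function plan(desired: State, actual: State): string[] {
  const actions: string[] = [];
  for (const key of Object.keys(desired)) {
    if (!(key in actual)) actions.push(`+ create ${key}`);
    else if (actual[key] !== desired[key]) actions.push(`~ update ${key}`);
  }
  for (const key of Object.keys(actual)) {
    if (!(key in desired)) actions.push(`- destroy ${key}`);
  }
  return actions;
}

const desired = { "aws_s3_bucket.logs": "v2", "aws_instance.web": "t3.micro" };
const actual = { "aws_s3_bucket.logs": "v1", "aws_instance.old": "t2.micro" };

console.log(plan(desired, actual));
// ["~ update aws_s3_bucket.logs", "+ create aws_instance.web", "- destroy aws_instance.old"]
```

    Running `plan` twice against an unchanged state yields the same actions — that is what idempotence means in practice.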

    Following the licence change in 2023, part of the community moved to OpenTofu, the open fork hosted by the Linux Foundation, which remains compatible with existing Terraform configurations.

  • Token (AI)

    TermConcept

    In the context of LLMs, a token is the basic unit manipulated by the model: a chunk of text (often part of a word, sometimes a short whole word or a single character) produced by a tokenizer before inference.

    A French text of 1,000 characters typically represents between 250 and 350 tokens. LLMs bill usage based on the number of input and output tokens, and their context window is also expressed in tokens.
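    That ratio gives a rough back-of-the-envelope estimator — only a heuristic, since real counts require running the model's actual tokenizer:

```typescript
// Rough token estimate: ~3-4 characters per token for French text.
// A heuristic only; real counts come from the model's own tokenizer.
function estimateTokens(text: string, charsPerToken = 3.5): number {
  return Math.ceil(text.length / charsPerToken);
}

const excerpt = "x".repeat(1000); // stand-in for 1,000 characters of text
console.log(estimateTokens(excerpt)); // 286 — inside the 250-350 range above
```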

    The choice of tokenizer (BPE, SentencePiece, Tiktoken…) influences performance on non-English languages: a tokenizer poorly optimised for French can consume many more tokens per character than a well-suited one.

  • Transformer

    TechConcept

    The transformer is the neural-network architecture introduced by Google in the paper Attention Is All You Need (2017), which underpins practically every modern generative AI model: LLMs, image, code and multimodal models.

    Its key innovation is the attention mechanism: at each step, the model dynamically weighs the relevance of each input element against the others, without depending on a sequential traversal like the older RNNs/LSTMs. This enables massive parallelism during training.
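    A tiny numeric sketch of that weighing for a single query (single-head, scaled dot-product attention; the vectors are toy values):

```typescript
// weights = softmax(q·k_i / sqrt(d)), output = Σ_i weights_i · v_i
function dot(a: number[], b: number[]): number {
  return a.reduce((s, x, i) => s + x * b[i], 0);
}

function softmax(xs: number[]): number[] {
  const m = Math.max(...xs);
  const exps = xs.map((x) => Math.exp(x - m));
  const z = exps.reduce((s, x) => s + x, 0);
  return exps.map((x) => x / z);
}

function attention(q: number[], keys: number[][], values: number[][]): number[] {
  const scale = Math.sqrt(q.length);
  const weights = softmax(keys.map((k) => dot(q, k) / scale));
  // Weighted sum of the value vectors.
  return values[0].map((_, j) =>
    weights.reduce((s, w, i) => s + w * values[i][j], 0),
  );
}

const q = [1, 0];
const keys = [[1, 0], [0, 1]];     // q resembles the first key far more
const values = [[10, 0], [0, 10]];
console.log(attention(q, keys, values)); // weighted toward [10, 0]
```

    Because every query attends to every key, the cost grows quadratically with sequence length — the limitation the alternative architectures below try to address.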

    GPT, BERT, Llama, Claude, Gemini, Mistral, Stable Diffusion are all built on transformer variants. Alternative architectures (Mamba, RWKV, state space models) are emerging to address its quadratic cost in context length, but the transformer remains dominant in 2026.

  • tRPC

    FrameworkTech

    tRPC (TypeScript Remote Procedure Call) is an open-source framework for building end-to-end typed APIs between a TypeScript back-end and front-end, with no code generation or intermediate schema (no OpenAPI, no GraphQL).

    The server exposes procedures (queries, mutations, subscriptions); the client calls them as if they were local functions, with full autocomplete and type safety — including for parameters validated via Zod.
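    The core idea can be sketched without the library itself (illustrative names, not tRPC's real API): the server's procedure map is exported as a type, and the client is typed against it, so there is nothing to generate.

```typescript
// "Server": a plain object of procedures. Its TYPE is the single source of truth.
const router = {
  greet: (name: string) => `Hello, ${name}!`,
  add: (a: number, b: number) => a + b,
};
type AppRouter = typeof router;

// "Client": typed entirely from AppRouter — no codegen, no schema file.
// (tRPC does this over HTTP; here the call stays local to keep the sketch runnable.)
function createClient<R extends Record<string, (...args: any[]) => any>>(r: R): R {
  return r;
}

const client = createClient<AppRouter>(router);
const msg = client.greet("Ada"); // inferred as string
const sum = client.add(2, 3);    // inferred as number
console.log(msg, sum); // "Hello, Ada! 5"
```

    Rename a procedure or change a parameter type on the server, and every client call site fails to compile — that is the end-to-end guarantee.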

    tRPC shines in fullstack TypeScript monorepos (Next.js, T3 Stack) where types can be shared directly. It complements Next.js Server Actions well and is often compared to GraphQL for single-language contexts.

  • Trunk-Based Development

    Methodology

    Trunk-based development (TBD) is a version-control strategy where every developer integrates their changes directly into a single main branch (the trunk, often `main`) at least daily, rather than working on long-lived branches such as `develop` or feature branches that live for weeks.

    TBD relies on a strict CI (automated tests on every commit), feature flags to decouple deployment from release, and quick code reviews. It avoids the merge hells associated with long-running branches and drastically speeds up delivery.
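    The feature-flag part of that recipe is simple in code (a minimal sketch; the flag name and store are invented — real teams use a flag service like LaunchDarkly or GrowthBook):

```typescript
// Code for an unfinished feature is merged to `main` and deployed,
// but stays dark until its flag is turned on.
const flags: Record<string, boolean> = {
  "new-checkout": false, // merged daily, released later
};

function isEnabled(flag: string): boolean {
  return flags[flag] ?? false;
}

function checkout(): string {
  return isEnabled("new-checkout") ? "new checkout flow" : "legacy checkout flow";
}

console.log(checkout()); // "legacy checkout flow" until the flag flips
```

    Flipping the flag releases the feature without any new deployment — deployment and release become two separate decisions.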

    It is a foundational practice of continuous delivery and one of the DORA indicators of team performance; popularised by Google, it is now standard in high-performing teams.

  • Turborepo

    ToolTech

    Turborepo is an open-source incremental build orchestrator for JavaScript/TypeScript monorepos, acquired by Vercel in 2021.

    It analyses the dependency graph between packages in a monorepo, caches the results of every task (build, test, lint) locally and remotely, and only re-runs what actually changed. On monorepos with hundreds of packages, the CI time savings can be an order of magnitude.
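    The caching idea at the heart of this can be sketched in miniature (a toy illustration, not Turborepo's implementation): task results are keyed by a hash of their inputs, so unchanged tasks are replayed instead of re-run.

```typescript
import { createHash } from "node:crypto";

// Cache task outputs keyed by a hash of the task name and its inputs.
const cache = new Map<string, string>();
let runs = 0;

function runTask(name: string, inputs: string, task: () => string): string {
  const key = createHash("sha256").update(name + "\0" + inputs).digest("hex");
  const hit = cache.get(key);
  if (hit !== undefined) return hit; // cache hit: skip the work entirely
  runs++;
  const out = task();
  cache.set(key, out);
  return out;
}

runTask("build", "src v1", () => "bundle-1");
runTask("build", "src v1", () => "bundle-1"); // replayed from cache
runTask("build", "src v2", () => "bundle-2"); // inputs changed: re-run
console.log(runs); // 2
```

    Turborepo does this per package across the dependency graph, and shares the cache remotely so CI and teammates reuse each other's results.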

    It competes with Nx (Nrwl) with a more minimalist philosophy. Its sibling project Turbopack, also developed at Vercel, is the Rust bundler used for Next.js builds.

  • TypeScript

    LanguageTech

    TypeScript is an open-source programming language created by Microsoft in 2012 that adds a static type system on top of JavaScript. TypeScript code is compiled (technically transpiled) to JavaScript runnable in any browser or runtime.

    The type system is structural and fully optional (you can progressively type an existing JavaScript codebase). It eliminates a large class of errors at compile time, improves autocomplete and documentation, and makes refactoring large codebases far easier.
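    "Structural" means compatibility is decided by shape, not by declared names — a short sketch:

```typescript
// Structural typing: any value with the right shape is accepted,
// whether or not it was declared with this interface in mind.
interface Named {
  name: string;
}

function greet(x: Named): string {
  return `Hello, ${x.name}`;
}

// This object never mentions `Named`, but its shape matches, so it type-checks.
const user = { name: "Ada", role: "admin" };
console.log(greet(user)); // "Hello, Ada"
```

    This is what makes progressive adoption possible: existing JavaScript objects satisfy interfaces simply by having the right properties.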

    TypeScript has become, in 2026, the default language of web development on both client and server, used by nearly every serious new project in the JavaScript ecosystem.

  • UX/UI Designer

    Role
  • Vector Database

    TechTool

    A vector database is a database specialised in storing and efficiently searching high-dimensional vectors (embeddings), via approximate nearest-neighbour (ANN) algorithms such as HNSW, IVF or DiskANN.

    They are the storage layer of RAG architectures: you index the embeddings of every chunk of a corpus and then retrieve, in milliseconds, the chunks semantically closest to a given question — even across millions of vectors.
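    The underlying comparison is a similarity measure between vectors. A brute-force sketch with cosine similarity (toy 3-dimensional vectors; real embeddings have hundreds or thousands of dimensions, which is why ANN indexes replace the full scan below):

```typescript
// Cosine similarity between two embeddings.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Exact (brute-force) nearest neighbour: what ANN indexes approximate at scale.
function nearest(query: number[], corpus: { id: string; vec: number[] }[]) {
  return corpus.reduce((best, c) =>
    cosine(query, c.vec) > cosine(query, best.vec) ? c : best,
  );
}

const corpus = [
  { id: "doc-cats", vec: [0.9, 0.1, 0.0] },
  { id: "doc-cars", vec: [0.1, 0.9, 0.2] },
];
console.log(nearest([0.8, 0.2, 0.0], corpus).id); // "doc-cats"
```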

    Reference solutions include Pinecone, Qdrant, Weaviate, Milvus, Chroma (open source) and Turbopuffer — plus extensions of existing databases: pgvector for PostgreSQL, Atlas Vector Search for MongoDB, Elasticsearch and OpenSearch.

  • Vite

    ToolTech

    Vite is an open-source front-end build tool created by Evan You (the author of Vue.js) in 2020, which quickly became the standard of modern JavaScript tooling.

    It combines an extremely fast development server based on native ES modules (the browser loads source files directly, with instant HMR and no bundling step) with a production build based on Rollup. It is compatible with React, Vue, Svelte, Solid, Lit, vanilla JS and most other frameworks.

    Vite is used internally by many frameworks (SvelteKit, Astro, Nuxt, some versions of Remix) as the build layer. The rolldown project, currently replacing Rollup, aims to bring even more performance through a Rust rewrite.

  • VP of Engineering

    Role
  • VueJS

    FrameworkTech

    Vue.js (often just Vue) is an open-source JavaScript framework used to build user interfaces and single-page web applications (SPAs).

    Created by Evan You in 2014, it is maintained by him and a core team, together with its ecosystem (Vue Router, Pinia, Nuxt). Vue is used by Adobe, Alibaba and GitLab, among others.

    It positions itself as an alternative to React and Angular, with a gentle learning curve and a very approachable reactive API.

  • Wallet

    TermTech

    A wallet is a piece of software that lets a user store their private cryptographic keys and interact with a blockchain: send and receive cryptocurrencies, sign transactions, connect to DeFi dApps.

    There are two main families: hot wallets (connected software, like MetaMask, Rabby, Coinbase Wallet) and cold wallets (offline hardware, like Ledger or Trezor) that offer far stronger security for larger holdings. Smart wallets (account abstraction, ERC-4337) make it possible to encode complex rules: social recovery, multisig, gas paid by a third party.

    In practice, the wallet is the user's on-chain identity — losing your seed phrase means losing access to your assets.

  • Web Architecture

    TechTerm

    Web architecture describes the organisation and structure of a web application. It defines the development blueprint for the app and how its building blocks communicate with each other.

    It is defined upfront, based on the application's needs, in order to make development of each module easier by providing a clear organisational logic.

  • WebAssembly (Wasm)

    TechTerm

    WebAssembly (Wasm) is a portable, low-level binary format standardised by the W3C, designed to run code at near-native performance in the browser — and now well beyond.

    Wasm is a compilation target for languages like Rust, C/C++, Go, Zig or AssemblyScript. It lets you bring to the Web applications historically reserved for the desktop (Figma, Photoshop Web, AutoCAD Web).

    Outside the browser, Wasm is becoming a universal runtime for edge serverless (Cloudflare Workers, Fastly Compute, WasmEdge), extensible plugins (Envoy, Istio, Shopify Functions) and serverless functions with near-zero cold start. The WASI standard opens access to the filesystem and network outside the browser.
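    The binary format itself starts with eight fixed bytes: the magic string `\0asm` followed by version 1. The `WebAssembly` JavaScript API (available in browsers and Node.js) can check a binary without running it:

```typescript
// The smallest valid Wasm module: the magic bytes "\0asm" plus version 1.
const emptyModule = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
]);

// WebAssembly.validate checks a binary without instantiating it.
console.log(WebAssembly.validate(emptyModule)); // true

// A truncated or corrupted binary is rejected.
console.log(WebAssembly.validate(emptyModule.slice(0, 4))); // false
```

    In practice you never write these bytes by hand: a compiler (Rust's `wasm32` targets, Emscripten…) emits the binary, and `WebAssembly.instantiate` loads it.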

  • Zero Trust

    MethodologyConcept

    Zero Trust is a security model that assumes no user, device or network flow should be considered trustworthy by default — even if it comes from the internal network — in contrast to the historical trusted perimeter model (firewall + VPN).

    In a Zero Trust architecture, every access to a resource is verified on each request based on identity (authenticated user, MFA), context (compliant device, geolocation, risk score) and least privilege (the user has access only to what they strictly need).
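    A toy per-request decision combining those three checks might look like this (all field names and rules here are invented for illustration — real systems delegate this to an identity provider and a policy engine):

```typescript
// Toy Zero Trust decision: every request is evaluated on its own merits;
// nothing is trusted because of where it comes from.
interface AccessRequest {
  user: string;
  mfaPassed: boolean;
  deviceCompliant: boolean;
  resource: string;
}

// Least privilege: each user sees only the resources explicitly granted.
const grants: Record<string, string[]> = {
  ada: ["billing-dashboard"],
};

function authorize(req: AccessRequest): boolean {
  if (!req.mfaPassed) return false;       // identity: MFA required
  if (!req.deviceCompliant) return false; // context: device posture
  return (grants[req.user] ?? []).includes(req.resource); // least privilege
}

console.log(authorize({ user: "ada", mfaPassed: true, deviceCompliant: true, resource: "billing-dashboard" })); // true
console.log(authorize({ user: "ada", mfaPassed: true, deviceCompliant: true, resource: "prod-db" })); // false
```

    The key property: the check runs on every request, not once at the network perimeter.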

    The model is operationalised through ZTNA solutions (Cloudflare Zero Trust, Zscaler, Tailscale, Twingate), which gradually replace traditional VPNs in hybrid environments.