[{"content":"Teams often claim \u0026ldquo;we have a REST API in place.\u0026rdquo; But when you look at the actual JSON responses, there are no links anywhere. Just raw data. That\u0026rsquo;s not REST, it\u0026rsquo;s CRUD exposed over HTTP.\nThe difference comes down to one principle most developers overlook: HATEOAS.\nWhat Is HATEOAS in a REST API? HATEOAS stands for Hypermedia As The Engine Of Application State. It is one of the fundamental constraints of REST, defined by Roy Fielding in his 2000 dissertation, the same paper that coined the term REST itself.\nThe principle is straightforward: a REST client should not need to know the API routes in advance. It starts from an entry point and discovers available actions by following the links provided in each response.\nIt works exactly like browsing the web. You land on a page, read the available links, click, and move forward. You don\u0026rsquo;t manually type URLs at every step.\nCRUD over HTTP vs Real REST API Design Without HATEOAS, a typical response looks like this:\n{ \u0026#34;id\u0026#34;: 1, \u0026#34;status\u0026#34;: \u0026#34;pending\u0026#34;, \u0026#34;montant\u0026#34;: 1500.00, \u0026#34;client_id\u0026#34;: 7 } The client receiving this must already know:\nthat to validate, it needs to call POST /contrats/1/valider/ that to cancel, it\u0026rsquo;s DELETE /contrats/1/ that the client is accessible via GET /clients/7/ That knowledge is hardcoded on the client side. If the API changes a route, the client breaks. 
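That hardcoded knowledge can be made concrete with a short client-side sketch. This is illustrative Python, not code from any real client; the route strings simply mirror the examples above:

```python
API_BASE = '/api'  # assumed URL prefix for this sketch

def validate_contract_url(contract_id: int) -> str:
    # Route knowledge lives in the client: if the server ever renames
    # /valider/, every caller of this helper silently breaks.
    return f'{API_BASE}/contrats/{contract_id}/valider/'

def cancel_contract_url(contract_id: int) -> str:
    # Same problem: the client duplicates server-side routing.
    return f'{API_BASE}/contrats/{contract_id}/'
```

Each helper is a copy of server-side routing knowledge maintained on the client.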
That\u0026rsquo;s not evolvability, it\u0026rsquo;s tight coupling in disguise.\nA Real HATEOAS Response With HATEOAS, the same response becomes self-descriptive:\n{ \u0026#34;id\u0026#34;: 1, \u0026#34;status\u0026#34;: \u0026#34;pending\u0026#34;, \u0026#34;montant\u0026#34;: 1500.00, \u0026#34;client_id\u0026#34;: 7, \u0026#34;_links\u0026#34;: { \u0026#34;self\u0026#34;: { \u0026#34;href\u0026#34;: \u0026#34;/api/contrats/1/\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;GET\u0026#34; }, \u0026#34;valider\u0026#34;: { \u0026#34;href\u0026#34;: \u0026#34;/api/contrats/1/valider/\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;POST\u0026#34; }, \u0026#34;annuler\u0026#34;: { \u0026#34;href\u0026#34;: \u0026#34;/api/contrats/1/\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;DELETE\u0026#34; }, \u0026#34;client\u0026#34;: { \u0026#34;href\u0026#34;: \u0026#34;/api/clients/7/\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;GET\u0026#34; } } } The client no longer needs to know the routes. It reads the available links and knows which actions are possible given the current state of the resource. If the contract is already validated, the valider link simply does not appear in the response. The client doesn\u0026rsquo;t even have to check.\nHATEOAS in Practice: Links Reflect Resource State This is where HATEOAS becomes genuinely powerful. 
The links change based on the resource\u0026rsquo;s state:\n// Contract with status \u0026#34;pending\u0026#34; \u0026#34;_links\u0026#34;: { \u0026#34;self\u0026#34;: { \u0026#34;href\u0026#34;: \u0026#34;/api/contrats/1/\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;GET\u0026#34; }, \u0026#34;valider\u0026#34;: { \u0026#34;href\u0026#34;: \u0026#34;/api/contrats/1/valider/\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;POST\u0026#34; }, \u0026#34;annuler\u0026#34;: { \u0026#34;href\u0026#34;: \u0026#34;/api/contrats/1/\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;DELETE\u0026#34; } } // Same contract, status \u0026#34;validated\u0026#34; \u0026#34;_links\u0026#34;: { \u0026#34;self\u0026#34;: { \u0026#34;href\u0026#34;: \u0026#34;/api/contrats/1/\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;GET\u0026#34; }, \u0026#34;resilier\u0026#34;: { \u0026#34;href\u0026#34;: \u0026#34;/api/contrats/1/resilier/\u0026#34;, \u0026#34;method\u0026#34;: \u0026#34;POST\u0026#34; } } The client does not code conditional logic (if status == \u0026quot;pending\u0026quot;: show_validate_button). It reads the available links and builds its interface accordingly. 
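A hypermedia client can therefore be written without any hardcoded state logic. A minimal sketch in plain Python, assuming the `_links` shape shown above (the payloads here are hand-built for illustration):

```python
def available_actions(resource: dict) -> set[str]:
    # Everything except 'self' is an action the server currently allows.
    return set(resource.get('_links', {})) - {'self'}

pending_contract = {
    'id': 1,
    'status': 'pending',
    '_links': {
        'self': {'href': '/api/contrats/1/', 'method': 'GET'},
        'valider': {'href': '/api/contrats/1/valider/', 'method': 'POST'},
        'annuler': {'href': '/api/contrats/1/', 'method': 'DELETE'},
    },
}

validated_contract = {
    'id': 1,
    'status': 'validated',
    '_links': {
        'self': {'href': '/api/contrats/1/', 'method': 'GET'},
        'resilier': {'href': '/api/contrats/1/resilier/', 'method': 'POST'},
    },
}

# available_actions(pending_contract) yields the pending actions,
# available_actions(validated_contract) yields only the validated ones.
```

Note that the client never inspects `status`: the set of links *is* the state.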
The API drives the application state, which is precisely what the name HATEOAS means.\nHATEOAS Implementation with Django REST Framework DRF does not include HATEOAS natively, but it can be implemented cleanly with a dedicated serializer:\nfrom rest_framework import serializers from .models import Contrat, ContratStatus class ContratSerializer(serializers.ModelSerializer): _links = serializers.SerializerMethodField() class Meta: model = Contrat fields = [\u0026#39;id\u0026#39;, \u0026#39;status\u0026#39;, \u0026#39;montant\u0026#39;, \u0026#39;client_id\u0026#39;, \u0026#39;_links\u0026#39;] def get__links(self, obj: Contrat) -\u0026gt; dict[str, dict[str, str]]: base = f\u0026#39;/api/contrats/{obj.pk}/\u0026#39; links: dict[str, dict[str, str]] = { \u0026#39;self\u0026#39;: {\u0026#39;href\u0026#39;: base, \u0026#39;method\u0026#39;: \u0026#39;GET\u0026#39;}, } if obj.status == ContratStatus.PENDING: links[\u0026#39;valider\u0026#39;] = {\u0026#39;href\u0026#39;: f\u0026#39;{base}valider/\u0026#39;, \u0026#39;method\u0026#39;: \u0026#39;POST\u0026#39;} links[\u0026#39;annuler\u0026#39;] = {\u0026#39;href\u0026#39;: base, \u0026#39;method\u0026#39;: \u0026#39;DELETE\u0026#39;} if obj.status == ContratStatus.VALIDATED: links[\u0026#39;resilier\u0026#39;] = {\u0026#39;href\u0026#39;: f\u0026#39;{base}resilier/\u0026#39;, \u0026#39;method\u0026#39;: \u0026#39;POST\u0026#39;} if obj.client_id: links[\u0026#39;client\u0026#39;] = {\u0026#39;href\u0026#39;: f\u0026#39;/api/clients/{obj.client_id}/\u0026#39;, \u0026#39;method\u0026#39;: \u0026#39;GET\u0026#39;} return links Every time a contract is serialized, the links reflect its current state. No conditional logic required on the client side.\nRichardson Maturity Model: Where Does Your REST API Stand? 
Level Description Example 0 Single endpoint, everything via POST SOAP, XML-RPC 1 Distinct resources GET /contrats/1 2 Correct HTTP verbs POST /contrats/, DELETE /contrats/1 3 HATEOAS Responses include _links Most production APIs sit at level 2. Level 3 is what Fielding actually calls \u0026ldquo;REST.\u0026rdquo;\nShould You Always Implement HATEOAS? Honestly, no. It is particularly well suited when:\nThe API is public and consumed by unknown third-party clients The workflow is complex and likely to evolve You want to reduce coupling between client and API For an internal API, levels 1 and 2 are often sufficient. But understanding HATEOAS changes the way you think about API design.\nWorking on Django query optimization? Check out Django in_bulk(): why it beats filter() for bulk lookups.\n","permalink":"https://dev-flow.io/en/posts/api-rest-hateoas/","summary":"\u003cp\u003eTeams often claim \u0026ldquo;we have a REST API in place.\u0026rdquo; But when you look at the actual JSON responses, there are no links anywhere. Just raw data. That\u0026rsquo;s not REST, it\u0026rsquo;s CRUD exposed over HTTP.\u003c/p\u003e\n\u003cp\u003eThe difference comes down to one principle most developers overlook: \u003cstrong\u003eHATEOAS\u003c/strong\u003e.\u003c/p\u003e\n\u003ch2 id=\"what-is-hateoas-in-a-rest-api\"\u003eWhat Is HATEOAS in a REST API?\u003c/h2\u003e\n\u003cp\u003eHATEOAS stands for \u003cstrong\u003eHypermedia As The Engine Of Application State\u003c/strong\u003e. It is one of the fundamental constraints of REST, defined by Roy Fielding in his 2000 dissertation, the same paper that coined the term REST itself.\u003c/p\u003e","title":"HATEOAS: Your REST API Might Just Be CRUD"},{"content":"Django ORM gives you two ways to add a computed value across a set of rows: annotate() with a classic aggregation (Max, Count, Sum\u0026hellip;) or annotate() with a Window function. On the surface they look similar. 
In practice, they behave in fundamentally different ways — and picking the wrong one can break your entire filtering chain.\nGROUP BY with annotate(): rows that collapse When you combine values() and annotate() with an aggregation, Django generates a GROUP BY in SQL. The result: rows get merged, and you end up with one row per group.\nfrom django.db.models import Max def get_latest_dates(self) -\u0026gt; QuerySet: return self.values(\u0026#39;ctr_id\u0026#39;).annotate( latest_date=Max(\u0026#39;evt_end_effect_date\u0026#39;) ) Generated SQL:\nSELECT ctr_id, MAX(evt_end_effect_date) AS latest_date FROM events GROUP BY ctr_id The result is a dictionary per group — {'ctr_id': 1, 'latest_date': date(2024, 12, 31)} — no longer full model instances, just the aggregated fields.\nWhat you need to understand about chainability: you can still call .filter() or .exclude() afterwards, but the semantics shift completely. Filters apply to the aggregated groups, not to the original rows. You\u0026rsquo;re no longer filtering individual events — you\u0026rsquo;re filtering group results.\n# ⚠️ This filter applies to groups, not the source rows self.values(\u0026#39;ctr_id\u0026#39;).annotate(latest_date=Max(\u0026#39;evt_end_effect_date\u0026#39;)).filter( latest_date__gte=date(2024, 1, 1) ) # SQL: HAVING MAX(evt_end_effect_date) \u0026gt;= \u0026#39;2024-01-01\u0026#39; # No select_related(), no access to other fields on the row Window functions: annotate without touching the rows A Window function computes a value over a partition of rows, but keeps all rows intact. 
Each row receives its computed value as an additional annotation.\nfrom django.db.models import F, Window from django.db.models.functions import FirstValue def with_latest_dates(self) -\u0026gt; QuerySet: return self.annotate( latest_date=Window( expression=FirstValue(\u0026#39;evt_end_effect_date\u0026#39;), partition_by=[\u0026#39;ctr_id\u0026#39;], order_by=F(\u0026#39;evt_end_effect_date\u0026#39;).desc(), ) ) Generated SQL:\nSELECT *, FIRST_VALUE(evt_end_effect_date) OVER ( PARTITION BY ctr_id ORDER BY evt_end_effect_date DESC ) AS latest_date FROM events All rows are present. Each one now has latest_date — the most recent date within its ctr_id group. And the QuerySet remains a normal QuerySet.\n# ✅ Everything is still possible after a Window annotation qs = self.with_latest_dates() qs.filter(status=\u0026#39;active\u0026#39;) # normal filter (WHERE on non-window column) qs.select_related(\u0026#39;contract\u0026#39;) # normal join qs.exclude(latest_date__isnull=True) # filter on window annotation -\u0026gt; subquery qs.order_by(\u0026#39;ctr_id\u0026#39;, \u0026#39;-latest_date\u0026#39;) Django GROUP BY vs Window Function: a visual comparison GROUP BY Window function ────────────────────────────────── ────────────────────────────────── ctr_id=1, evt=A → row 1 ctr_id=1, evt=A, latest=A → row 1 ctr_id=1, evt=B → ctr_id=1, evt=B, latest=A → row 2 ctr_id=1, evt=C → MAX(C) ──► C ctr_id=1, evt=C, latest=A → row 3 ctr_id=2, evt=D → row 2 ctr_id=2, evt=D, latest=D → row 4 ctr_id=2, evt=E → MAX(E) ──► E ctr_id=2, evt=E, latest=D → row 5 GROUP BY collapses. Window annotates.\nConcrete use case: fetching the most recent row per group Goal: for each contract, retrieve the event with the most recent end-effect date, with access to all its fields.\nWith GROUP BY, this is impossible directly — you lose the row\u0026rsquo;s fields. 
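The information loss is easy to reproduce outside the ORM. A plain-Python sketch with dicts standing in for rows (not Django code; field names borrowed from the examples above):

```python
rows = [
    {'ctr_id': 1, 'evt': 'A', 'evt_end_effect_date': '2024-01-10'},
    {'ctr_id': 1, 'evt': 'B', 'evt_end_effect_date': '2024-03-05'},
    {'ctr_id': 2, 'evt': 'C', 'evt_end_effect_date': '2024-02-01'},
]

# GROUP BY-style aggregation: one value per group survives.
# ISO date strings compare correctly lexicographically.
latest_per_contract: dict[int, str] = {}
for row in rows:
    current = latest_per_contract.get(row['ctr_id'], '')
    latest_per_contract[row['ctr_id']] = max(current, row['evt_end_effect_date'])

# latest_per_contract holds only the winning dates; which event ('evt')
# carried each date is unrecoverable from the aggregate alone.
```

Recovering the full row from `latest_per_contract` would take a second lookup pass over the source rows — exactly the extra work a window function avoids.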
With Window + RowNumber:\nfrom django.db.models import F, Window from django.db.models.functions import RowNumber def get_latest_event_per_contract(self) -\u0026gt; QuerySet: return ( self.annotate( row_num=Window( expression=RowNumber(), partition_by=[\u0026#39;ctr_id\u0026#39;], order_by=F(\u0026#39;evt_end_effect_date\u0026#39;).desc(), ) ) .filter(row_num=1) .select_related(\u0026#39;contract\u0026#39;) ) RowNumber() numbers the rows within each partition, sorted by descending date. filter(row_num=1) keeps only the first one — the most recent. Django can\u0026rsquo;t add a direct WHERE clause on a Window function (not possible in standard SQL), so it generates a subquery instead: SELECT * FROM (...) \u0026quot;qualify\u0026quot; WHERE \u0026quot;row_num\u0026quot; = 1.\nWindow functions available in Django from django.db.models.functions import ( FirstValue, # first value in the partition LastValue, # last value Lag, # value N rows back: Lag(\u0026#39;field\u0026#39;, offset=1) Lead, # value N rows ahead: Lead(\u0026#39;field\u0026#39;, offset=1) NthValue, # nth value: NthValue(\u0026#39;field\u0026#39;, nth=2) Rank, # rank with ties (1, 1, 3) DenseRank, # rank without gaps (1, 1, 2) RowNumber, # unique row number per partition CumeDist, # cumulative distribution (0.0 → 1.0) PercentRank, # relative rank (0.0 → 1.0) Ntile, # split into N buckets: Ntile(num_buckets=4) ) GROUP BY or Django Window Function: a decision table values().annotate(Max(...)) annotate(Window(...)) SQL GROUP BY OVER (PARTITION BY ...) Rows kept One per group All Field access Only those in values() All Chainability Filter on groups (HAVING) Filter via subquery (WHERE on rows) select_related() ❌ ✅ Use case Count, sum, global max Rank, neighbouring value, per-row max The simple rule: if you need to keep rows intact and keep filtering normally after your computation, use a Window function. 
If you only want aggregated results (stats, totals, global maxima), values().annotate() is more direct and easier to read.\nWorking on Django ORM optimization? I also wrote about Django in_bulk(): why it beats filter() for bulk lookups.\n","permalink":"https://dev-flow.io/en/posts/django-window-group-by/","summary":"\u003cp\u003eDjango ORM gives you two ways to add a computed value across a set of rows: \u003ccode\u003eannotate()\u003c/code\u003e with a classic aggregation (\u003ccode\u003eMax\u003c/code\u003e, \u003ccode\u003eCount\u003c/code\u003e, \u003ccode\u003eSum\u003c/code\u003e\u0026hellip;) or \u003ccode\u003eannotate()\u003c/code\u003e with a \u003cstrong\u003eWindow function\u003c/strong\u003e. On the surface they look similar. In practice, they behave in fundamentally different ways — and picking the wrong one can break your entire filtering chain.\u003c/p\u003e\n\u003ch2 id=\"group-by-with-annotate-rows-that-collapse\"\u003eGROUP BY with annotate(): rows that collapse\u003c/h2\u003e\n\u003cp\u003eWhen you combine \u003ccode\u003evalues()\u003c/code\u003e and \u003ccode\u003eannotate()\u003c/code\u003e with an aggregation, Django generates a \u003ccode\u003eGROUP BY\u003c/code\u003e in SQL. The result: rows get merged, and you end up with \u003cstrong\u003eone row per group\u003c/strong\u003e.\u003c/p\u003e","title":"Django Window Functions vs GROUP BY: Chainable QuerySets"},{"content":"When you have a list of identifiers and want to retrieve the corresponding instances, the usual reflex in Django is filter(pk__in=[...]). It works — one SQL query. But in_bulk() is an often-overlooked ORM optimization: it returns a dictionary {id: instance} instead of a QuerySet, which fundamentally changes how you access results. Where filter() forces an O(n) traversal to find an object by ID, in_bulk() gives direct O(1) access.\nin_bulk() signature and behavior QuerySet.in_bulk(id_list=(), *, field_name=\u0026#39;pk\u0026#39;) id_list: list of identifiers to retrieve. 
If omitted (called without arguments), returns all objects in the table. field_name: field used as the dictionary key. Must have unique=True, otherwise Django raises a ValueError. The generated SQL is a simple WHERE pk IN (...) clause — one query regardless of list size.\nin_bulk() vs filter(): O(1) access instead of O(n) # filter() → QuerySet, O(n) access contrats: list[Contract] = list(Contract.objects.filter(pk__in=[1, 2, 3])) contrat: Contract | None = next((c for c in contrats if c.pk == 2), None) # in_bulk() → dict, O(1) access contrats_map: dict[int, Contract] = Contract.objects.in_bulk([1, 2, 3]) # → {1: \u0026lt;Contract pk=1\u0026gt;, 2: \u0026lt;Contract pk=2\u0026gt;, 3: \u0026lt;Contract pk=3\u0026gt;} contrat = contrats_map.get(2) # direct access, None if absent IDs not found in the database simply don\u0026rsquo;t appear in the returned dictionary. No error, no None value: missing key = object doesn\u0026rsquo;t exist.\nin_bulk() with field_name: index by any unique field in_bulk() accepts any unique=True field via field_name:\n# By unique reference refs: list[str] = [\u0026#39;REF-001\u0026#39;, \u0026#39;REF-002\u0026#39;, \u0026#39;REF-003\u0026#39;] contrats_map: dict[str, Contract] = Contract.objects.in_bulk( refs, field_name=\u0026#39;reference\u0026#39; ) # → {\u0026#39;REF-001\u0026#39;: \u0026lt;Contract ...\u0026gt;, \u0026#39;REF-002\u0026#39;: \u0026lt;Contract ...\u0026gt;, ...} contrat: Contract | None = contrats_map.get(\u0026#39;REF-002\u0026#39;) Particularly useful during data synchronizations where the business identifier isn\u0026rsquo;t the PK.\nDjango use cases: when in_bulk() makes the difference Hydrating multiple aggregates in one query In a DDD context, loading multiple aggregates from a list of IDs:\nids: list[int] = [event.contract_id for event in events] contrats_map: dict[int, Contract] = Contract.objects.in_bulk(ids) for event in events: contrat: Contract | None = contrats_map.get(event.contract_id) if contrat: 
contrat.apply(event) One query for all contracts, then direct ID lookup in the loop.\nAvoiding N+1 during imports from decimal import Decimal def import_rows(csv_rows: list[dict[str, str]]) -\u0026gt; None: references: list[str] = [row[\u0026#39;ref\u0026#39;] for row in csv_rows] existing: dict[str, Product] = Product.objects.in_bulk( references, field_name=\u0026#39;reference\u0026#39; ) to_create: list[Product] = [] to_update: list[Product] = [] for row in csv_rows: if row[\u0026#39;ref\u0026#39;] in existing: product = existing[row[\u0026#39;ref\u0026#39;]] product.price = Decimal(row[\u0026#39;price\u0026#39;]) to_update.append(product) else: to_create.append(Product(reference=row[\u0026#39;ref\u0026#39;], price=row[\u0026#39;price\u0026#39;])) Product.objects.bulk_create(to_create) Product.objects.bulk_update(to_update, [\u0026#39;price\u0026#39;]) Classic import/sync pattern: one in_bulk() query, then bulk_create + bulk_update. Zero N+1.\nFetching all objects from a table # Loads the entire table into memory — reserve for small tables config: dict[int, AppSetting] = AppSetting.objects.in_bulk() value: str = config[42].value Handy for reference tables (countries, currencies, settings) queried frequently.\nOptimizing in_bulk() on large lists with chunking For lists of thousands of IDs, the IN(...) clause can get heavy on the database side. 
The solution: split into batches.\nfrom collections.abc import Iterator from itertools import islice from typing import Any from django.db.models import QuerySet def chunked(iterable: list[Any], size: int) -\u0026gt; Iterator[list[Any]]: it = iter(iterable) while chunk := list(islice(it, size)): yield chunk def in_bulk_chunked( queryset: QuerySet, ids: list[Any], chunk_size: int = 500, field_name: str = \u0026#39;pk\u0026#39;, ) -\u0026gt; dict[Any, Any]: result: dict[Any, Any] = {} for chunk in chunked(ids, chunk_size): result.update(queryset.in_bulk(chunk, field_name=field_name)) return result # Usage contracts: dict[int, Contract] = in_bulk_chunked( Contract.objects, list_of_5000_ids ) Summary: in_bulk() vs filter() in Django filter(pk__in=[...]) in_bulk([...]) Return QuerySet (list) dict {id: instance} Access by ID O(n) — traversal O(1) — direct key SQL queries 1 1 Missing IDs silently ignored key absent from dict field_name no yes (unique=True required) in_bulk() isn\u0026rsquo;t a universal replacement for filter(). It\u0026rsquo;s a specific tool: when you have IDs and want direct key-based access, it\u0026rsquo;s the right choice. For everything else, filter() remains perfectly suited.\nWorking on Django performance topics? Check out why AI makes learning to code more essential than ever.\n","permalink":"https://dev-flow.io/en/posts/django-in-bulk/","summary":"\u003cp\u003eWhen you have a list of identifiers and want to retrieve the corresponding instances, the usual reflex in Django is \u003ccode\u003efilter(pk__in=[...])\u003c/code\u003e. It works — one SQL query. But \u003ccode\u003ein_bulk()\u003c/code\u003e is an often-overlooked ORM optimization: it returns a \u003cstrong\u003edictionary\u003c/strong\u003e \u003ccode\u003e{id: instance}\u003c/code\u003e instead of a QuerySet, which fundamentally changes how you access results. 
Where \u003ccode\u003efilter()\u003c/code\u003e forces an O(n) traversal to find an object by ID, \u003ccode\u003ein_bulk()\u003c/code\u003e gives direct O(1) access.\u003c/p\u003e\n\u003ch2 id=\"in_bulk-signature-and-behavior\"\u003ein_bulk() signature and behavior\u003c/h2\u003e\n\u003cdiv class=\"highlight\"\u003e\u003cpre tabindex=\"0\" style=\"color:#f8f8f2;background-color:#272822;-moz-tab-size:4;-o-tab-size:4;tab-size:4;\"\u003e\u003ccode class=\"language-python\" data-lang=\"python\"\u003e\u003cspan style=\"display:flex;\"\u003e\u003cspan\u003eQuerySet\u003cspan style=\"color:#f92672\"\u003e.\u003c/span\u003ein_bulk(id_list\u003cspan style=\"color:#f92672\"\u003e=\u003c/span\u003e(), \u003cspan style=\"color:#f92672\"\u003e*\u003c/span\u003e, field_name\u003cspan style=\"color:#f92672\"\u003e=\u003c/span\u003e\u003cspan style=\"color:#e6db74\"\u003e\u0026#39;pk\u0026#39;\u003c/span\u003e)\n\u003c/span\u003e\u003c/span\u003e\u003c/code\u003e\u003c/pre\u003e\u003c/div\u003e\u003cul\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ccode\u003eid_list\u003c/code\u003e\u003c/strong\u003e: list of identifiers to retrieve. If omitted (called without arguments), returns \u003cstrong\u003eall\u003c/strong\u003e objects in the table.\u003c/li\u003e\n\u003cli\u003e\u003cstrong\u003e\u003ccode\u003efield_name\u003c/code\u003e\u003c/strong\u003e: field used as the dictionary key. Must have \u003ccode\u003eunique=True\u003c/code\u003e, otherwise Django raises a \u003ccode\u003eValueError\u003c/code\u003e.\u003c/li\u003e\n\u003c/ul\u003e\n\u003cp\u003eThe generated SQL is a simple \u003ccode\u003eWHERE pk IN (...)\u003c/code\u003e clause — one query regardless of list size.\u003c/p\u003e","title":"Django in_bulk(): why it beats filter() for bulk lookups"},{"content":"We keep hearing the same promise lately: \u0026ldquo;No need to know how to code anymore — AI handles it.\u0026rdquo; And honestly, it\u0026rsquo;s tempting. 
You open an agent, describe what you want, and within seconds, code appears. Magic.\nExcept not really.\nAgentic development — but for whom? AI-assisted development is a genuine revolution. I\u0026rsquo;m not going to pretend otherwise. For a senior or intermediate developer who has already wrestled with complex problems, debugged twisted algorithms, and shipped systems to production, productivity reaches unprecedented heights. You delegate repetitive tasks, prototype in hours what used to take days, and stay in your high-value zone: architecture, critical decisions, validation.\nBut there\u0026rsquo;s a key word in that sentence: validation.\nBecause agentic development is far from perfect. The agent hallucinates. It generates code that compiles but is fundamentally wrong. It ignores project conventions, bypasses best practices, and quietly introduces security flaws. To truly leverage this tool, you need to be able to guide it, correct it, challenge it.\nAnd to guide a coding agent, you need to know how to code.\nValidating without understanding: a dangerous illusion Imagine putting someone who\u0026rsquo;s never set foot on a construction site in charge of overseeing structural work. They can check if the walls are straight, if the paint looks nice. But the foundations? Seismic standards? Electrical compliance? That\u0026rsquo;ll go right over their head.\nThis is exactly what happens when a developer without hands-on experience tries to validate AI-generated code. They can check that it \u0026ldquo;works\u0026rdquo; on the surface. But readability, maintainability, edge case handling, algorithmic correctness, security risks — all of that will remain invisible to them.\nThe validation will be superficial. And in our field, superficial always blows up in production.\nLearning can\u0026rsquo;t be purely theoretical We know this: learning development requires practice. Writing lines of code. Failing. Debugging for three hours only to find a missing comma. 
Implementing an algorithm from scratch to understand why time complexity matters. Wrestling with an unexplainable regression until you develop an instinct for likely causes.\nThese are the scars that shape a developer capable of judging, anticipating, deciding.\nBut if AI is always there to \u0026ldquo;rescue\u0026rdquo; the learner from every obstacle, cognitive laziness sets in. Why think when AI answers? Why explore when the solution is one prompt away? You stop confronting the problem. You delegate the thinking. And without realizing it, you never develop the reflexes that make the difference.\nThe question nobody\u0026rsquo;s asking enough yet In five to ten years, a generation of senior developers will retire. The ones who built critical systems, who know battle-tested patterns, who can say \u0026ldquo;I\u0026rsquo;ve seen this go wrong before\u0026rdquo; — they\u0026rsquo;ll be gone.\nWho will replace them?\nDevelopers trained in a world where AI writes the code for them, where hands-on learning was short-circuited by convenience? People who can write good prompts but can\u0026rsquo;t rigorously audit what the agent produced?\nToday, critical production code is still validated by competent humans. But that competence isn\u0026rsquo;t hereditary. It\u0026rsquo;s built, painfully, through experience.\nWhat if AI became perfect tomorrow? Maybe. Progress is real and accelerating. It\u0026rsquo;s possible that one day agents will be capable of qualitative self-validation — checking on their own that the code they produce follows best practices, is secure, performant, and maintainable.\nBut in my experience, even today with the most advanced models, if you want clean code, you have to guide it. Give it context. Impose constraints. Correct its course. And to do all that, you need vision. Expertise. Judgment.\nAI is an extraordinary tool. 
But like any tool, its effectiveness depends entirely on the hand that wields it.\nConclusion: learning to code has never been more important Paradoxically, the rise of AI in development makes learning to code more essential, not less. Not to write every line yourself — that\u0026rsquo;s a vision of the past — but to keep the ability to understand, evaluate, and direct what machines produce.\nThe new developers who invest in this difficult, practical learning will be the ones who get the most out of AI. The others will at best be surface-level operators: competent when things go smoothly, lost the moment something breaks.\nCode is still learned. And it\u0026rsquo;s learned by living it.\nNew to DevFlow? Find out why this blog talks about Python, Django and FastAPI.\n","permalink":"https://dev-flow.io/en/posts/ai-learning-code/","summary":"\u003cp\u003eWe keep hearing the same promise lately: \u003cem\u003e\u0026ldquo;No need to know how to code anymore — AI handles it.\u0026rdquo;\u003c/em\u003e And honestly, it\u0026rsquo;s tempting. You open an agent, describe what you want, and within seconds, code appears. Magic.\u003c/p\u003e\n\u003cp\u003eExcept not really.\u003c/p\u003e\n\u003ch2 id=\"agentic-development--but-for-whom\"\u003eAgentic development — but for whom?\u003c/h2\u003e\n\u003cp\u003eAI-assisted development is a genuine revolution. I\u0026rsquo;m not going to pretend otherwise. For a senior or intermediate developer who has already wrestled with complex problems, debugged twisted algorithms, and shipped systems to production, productivity reaches unprecedented heights. 
You delegate repetitive tasks, prototype in hours what used to take days, and stay in your high-value zone: architecture, critical decisions, validation.\u003c/p\u003e","title":"AI doesn't replace learning to code"},{"content":"This blog is first and foremost a place to share: discoveries, thoughts, things that have been useful to me and might be useful to others.\nPython, Django, FastAPI and DRF: the heart of this blog The heart of the blog is Python development, and more specifically the frameworks that shape my daily work: Django, FastAPI and Flask. Each has its strengths, its use cases, its pitfalls. We\u0026rsquo;ll dig into all of them.\nBut beyond code that works, what I care about is code that lasts. So we\u0026rsquo;ll also cover methods and practices:\nTDD: writing tests before code, and why it genuinely changes the way you think SOLID: the principles behind maintainable code DDD: modelling the business, not just the database And then there\u0026rsquo;s everything you accumulate with experience: the patterns you adopt, the ones you drop, the mistakes you stop making, the shortcuts you learn to avoid.\nGo, Lua, JavaScript and other languages From time to time, we\u0026rsquo;ll step outside the boundaries. Go for what it brings in terms of performance and simplicity in certain contexts. Lua for its unexpected use cases. Other languages when the occasion arises, not for the sake of completeness, but out of curiosity.\nWelcome to DevFlow.\nInterested in development in the age of AI? 
Read my article on why AI makes learning to code more essential than ever.\n","permalink":"https://dev-flow.io/en/posts/why-this-blog/","summary":"\u003cp\u003eThis blog is first and foremost a place to share: discoveries, thoughts, things that have been useful to me and might be useful to others.\u003c/p\u003e\n\u003ch2 id=\"python-django-fastapi-and-drf-the-heart-of-this-blog\"\u003ePython, Django, FastAPI and DRF: the heart of this blog\u003c/h2\u003e\n\u003cp\u003eThe heart of the blog is \u003cstrong\u003ePython\u003c/strong\u003e development, and more specifically the frameworks that shape my daily work: \u003cstrong\u003eDjango\u003c/strong\u003e, \u003cstrong\u003eFastAPI\u003c/strong\u003e and \u003cstrong\u003eFlask\u003c/strong\u003e. Each has its strengths, its use cases, its pitfalls. We\u0026rsquo;ll dig into all of them.\u003c/p\u003e","title":"Why a blog about Python, Django and FastAPI?"}]