
Coding for Beginners or Advanced – A standard set of files for your projects

This might not apply to your projects, but as a DevOps Engineer I think it’s good to have a standard set of files and folders for each of my projects. It keeps things tidy and helps me stay organized. This list is based on my projects using Terraform, GitLab, Docker, Packer, Git and more.

  • .env
  • .terraformrc
  • .versionrc
  • .yamllint
  • .gitignore
  • .pre-commit-config.yaml
  • .terraform-docs.yaml
  • mkdocs.yml
  • docker-compose.yml
  • .gitlab (folder)
  • CODEOWNERS
  • docs (folder)
  • /docs/assets/img (folder)
  • README.md

Here’s a brief explanation of each item on the list:

  1. .env: This file is typically used to store environment variables. These variables can differ per environment (development, preproduction, production, testing), ensuring that sensitive data such as passwords, API keys, and database configurations are kept separate from the codebase. A good use case is with Docker. example below
  2. .terraformrc: This configuration file is specific to Terraform, an infrastructure as code tool. It contains settings that modify Terraform’s behaviour, such as API tokens, plugin configurations, or the address of a Terraform Enterprise server. example below
  3. .versionrc: This file is used to configure automatic versioning and changelog generation, often managed by tools like standard-version. It helps in defining how version numbers are bumped and how changelog entries are formatted. example below
  4. .yamllint: This is a configuration file for yamllint, a linter for YAML files. It ensures that all YAML files adhere to a specified format, improving consistency and preventing common errors like incorrect indentation or duplicated keys. example below
  5. .gitignore: A file used by Git to determine which files and directories to ignore when making commits. This helps prevent unwanted files (like build outputs, temporary files, or sensitive information) from being included in a repository. example below
  6. .pre-commit-config.yaml: Configuration file for pre-commit, a framework for managing and maintaining multi-language pre-commit hooks. It specifies which hooks to run before each commit, such as code linters or syntax formatters, to ensure code quality. example below
  7. .terraform-docs.yaml: Configuration file for terraform-docs, a tool that generates documentation from Terraform modules in various output formats. This helps maintain up-to-date documentation that is consistent with the actual code. example below
  8. mkdocs.yml: This file configures MkDocs, a static site generator that’s geared towards building project documentation. Documentation source files are written in Markdown, and the site is configured with this file. example below
  9. docker-compose.yml: A YAML file used with Docker Compose, a tool for defining and running multi-container Docker applications. It allows you to configure application services, networks, and volumes in a single file. example below
  10. .gitlab (folder): Typically, this directory would contain configuration files specific to GitLab CI/CD and other GitLab-related configurations. It’s used to customize GitLab’s behaviour or integration with other services. example below
  11. CODEOWNERS: A file utilized by GitHub (and other platforms like GitLab) to define individuals or teams responsible for code in a specific directory or file. It is used primarily for automatically assigning reviewers to pull requests. example below
  12. docs (folder): Generally, this folder contains project documentation files, often written in Markdown or similar lightweight markup language. It’s used to organize and store documentation separately from the code. example below
  13. /docs/assets/img (folder): This subfolder within the docs directory typically holds images used in the documentation. Storing them in a separate img folder helps keep the documentation organized. Used with tools such as MkDocs, for example. example below
  14. README.md (root): Located at the root of a project, this Markdown file is typically the first piece of documentation a user sees. It usually contains an overview of the project, installation instructions, usage examples, and licensing information. example below

Each of these files and folders plays a crucial role in organizing, documenting, and managing a project, particularly in collaborative and automated environments.

Examples of use

You have an .env file containing
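A minimal sketch of what such a file might contain; all variable names and values here are placeholders, not defaults of any tool:

```ini
# Database credentials kept out of version control
POSTGRES_USER=appuser
POSTGRES_PASSWORD=changeme
POSTGRES_DB=appdb

# External API key used by the application
API_KEY=replace-with-real-key
```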

Your docker-compose.yml might reference these variables to configure services:
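A sketch of the corresponding docker-compose.yml; Docker Compose automatically reads a .env file in the project directory and substitutes `${...}` references:

```yaml
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
```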

The .env file allows you to keep your credentials out of the docker-compose.yml and easily change them without modifying your Docker setup or committing sensitive data to your version control system.

.terraformrc example

The .terraformrc file is particularly useful in scenarios where you need to customize Terraform’s behaviour beyond default settings. This configuration file provides an opportunity to specify various configurations that influence how Terraform operates. Here are some practical uses of the .terraformrc file:

You might be in an environment where internet access is restricted, and downloading plugins directly from the internet during each Terraform run isn’t feasible. To handle this, you can configure the .terraformrc file to point to a local directory where plugins are pre-downloaded or mirrored:
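A sketch of such a .terraformrc, assuming providers have been pre-downloaded to /opt/terraform/providers (the paths are examples, not defaults):

```hcl
# Cache downloaded providers so they are not re-fetched on every run
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"

provider_installation {
  # Use providers pre-downloaded to this local directory
  filesystem_mirror {
    path    = "/opt/terraform/providers"
    include = ["*/*"]
  }
}
```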

You can use the .terraformrc file for redirecting Terraform to use a mirror of Terraform providers or modules, you need to specify the configuration that points Terraform’s plugin and module installation processes to your mirrored sources. This is especially useful in environments where access to the public Terraform Registry is restricted or where network efficiency is a concern.

  • Provider Installation: This section configures Terraform to use a network mirror for all providers. The network_mirror block specifies the URL of the mirror. The include directive is used to specify which providers should be fetched from the mirror (*/* indicates all providers). The direct block with exclude = ["*/*"] ensures that all providers are fetched from the mirror and none from the direct internet or public registries.
  • Module Installation: This section tells Terraform how to handle modules, rerouting requests for modules from the official Terraform Registry (registry.terraform.io) to a custom mirror. This setup ensures that any module requests to the official registry are redirected to the specified mirror.
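A sketch of the provider-mirror part of that configuration; the mirror URL is a placeholder for your own mirror:

```hcl
provider_installation {
  # Fetch every provider from the internal network mirror...
  network_mirror {
    url     = "https://terraform-mirror.example.com/providers/"
    include = ["*/*"]
  }
  # ...and never fall back to direct downloads from public registries
  direct {
    exclude = ["*/*"]
  }
}
```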

.versionrc example

The .versionrc file is a configuration file used with version management tools like standard-version, which automates the versioning and changelog generation process based on semantic versioning (semver) and commit messages. This file is particularly valuable in projects that implement automated version control and release processes, especially in collaborative software development environments.

You have a Node.js project, and you want to ensure that every time a new release is made, the version number is updated in package.json and a comprehensive changelog is automatically generated based on the commit messages. This ensures that the project adheres to semantic versioning principles and provides a clear history of changes for other developers and users.


  1. Install standard-version: First, you would install standard-version as a development dependency in your project:
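```shell
npm install --save-dev standard-version
```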

  2. Create a .versionrc file: You then create a .versionrc file in the root of your project to customize the behaviour of standard-version. Here’s an example .versionrc configuration:
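A sketch of a .versionrc matching the behaviour described below; the section names are examples you can adjust:

```json
{
  "types": [
    { "type": "feat", "section": "Features" },
    { "type": "fix", "section": "Bug Fixes" },
    { "type": "docs", "section": "Documentation" },
    { "type": "chore", "hidden": true },
    { "type": "style", "hidden": true }
  ]
}
```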

This configuration defines how different types of commits are categorized in the changelog. For instance, features and fixes are included under their respective sections, while chores and styles are hidden from the changelog.

  3. Automate the release process: To automate the release process, you can add a script to your package.json:
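For example, a release script in the scripts section of package.json:

```json
{
  "scripts": {
    "release": "standard-version"
  }
}
```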

  4. Running npm run release will then automatically:
    • Update the version in package.json based on your commits (e.g., fix: commits lead to a patch version bump, feat: commits lead to a minor version bump, and breaking changes lead to a major version bump).
    • Generate or update the CHANGELOG.md with a summary of changes grouped by type (Features, Bug Fixes, etc.), according to the rules specified in .versionrc.
    • Create a new commit with the updated package.json and CHANGELOG.md.
    • Tag the commit with the new version number.

Using the .versionrc file in this way ensures that the versioning and changelog generation process is not only automated but also customizable to fit the specific needs of your project. It helps maintain consistency, transparency, and a clear communication channel regarding changes made in the project. This approach is particularly useful in projects with multiple contributors where tracking changes manually would be cumbersome and error-prone.

.yamllint example

The .yamllint file is a configuration file used with yamllint, a linter for YAML files. Yamllint checks YAML files for formatting issues, syntax errors, and other potential problems, ensuring that your YAML files are not only syntactically correct but also adhere to best practices and style guidelines. This is particularly important in configurations where YAML is extensively used, such as in software development projects involving Docker, Kubernetes, or any configuration management system that utilizes YAML files.

In a software development project, a team is using Kubernetes to manage their containerized applications. Kubernetes configuration files, which are written in YAML, need to be consistent and error-free to avoid deployment issues and to ensure that configurations are easy to understand and maintain by all team members.

  1. Install yamllint:
    First, you need to install yamllint. It can be installed via pip:
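```shell
pip install yamllint
```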
  2. Create a .yamllint configuration file:
    Place a .yamllint file in the root directory of your project where your Kubernetes YAML files are stored. This file will define the rules for linting the YAML files. Here’s an example configuration that checks for common issues:
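A .yamllint configuration implementing the checks described below:

```yaml
extends: default

rules:
  # Warn (rather than fail) when lines exceed 80 characters
  line-length:
    max: 80
    level: warning
  # Require 2-space indentation, consistent within sequences
  indentation:
    spaces: 2
    indent-sequences: consistent
  # Flag trailing spaces at the end of lines
  trailing-spaces: enable
  # Allow documents to omit the leading '---' marker
  document-start: disable
  # Reject duplicate keys within a document
  key-duplicates: enable
  # Require keys in alphabetical order within a map
  key-ordering: enable
```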

This configuration sets up yamllint to:

  • Warn if lines exceed 80 characters.
  • Ensure indentation uses 2 spaces and is consistent within sequences.
  • Enable checks for trailing spaces at the end of lines.
  • Allow documents to start without a document start marker.
  • Check for duplicate keys within a YAML document.
  • Check that keys are in alphabetical order within a map.

Another configuration could look like this
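For example, a more permissive setup based on yamllint’s relaxed preset:

```yaml
extends: relaxed

rules:
  # Allow longer lines, common in Kubernetes manifests
  line-length:
    max: 120
  # Don't complain about yes/no/on/off as booleans
  truthy: disable
```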

  3. Integrate yamllint with CI/CD Pipeline:
    Integrate yamllint into your continuous integration/continuous deployment (CI/CD) pipeline. Configure the pipeline to run yamllint automatically on all Kubernetes configuration files whenever changes are committed. This can typically be done in the test stage of your pipeline script:
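A hypothetical GitLab CI job for this; the manifests path (k8s/) is an assumption about your repository layout:

```yaml
yamllint:
  stage: test
  image: python:3.12-slim
  script:
    - pip install yamllint
    # Lint all Kubernetes manifests using the project's .yamllint rules
    - yamllint -c .yamllint k8s/
```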

Using .yamllint in this context ensures that all Kubernetes configuration files are consistent and free from common syntactical and stylistic errors. It improves the reliability of deployments and eases maintenance and collaboration within the team. By catching issues early in the development lifecycle, the team can avoid deployment failures and reduce troubleshooting time, leading to a more efficient development process.

.gitignore example

The .gitignore file is crucial for any project that uses Git for version control. This file tells Git which files or directories to ignore and not track. It helps prevent unnecessary or sensitive files from being included in a repository, which is important for both the efficiency of repository operations and security.

Suppose you are developing an application using a typical web stack that includes frontend assets, backend code, and various configuration files. Your project also generates temporary files or logs that should not be uploaded to the Git repository.

  1. Create a .gitignore file: At the root of your project directory, create a .gitignore file. This file will specify all the files and directories that Git should ignore.
  2. Specify files and directories to ignore: Here’s an example of what the .gitignore file might contain for a web application project:
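A .gitignore implementing exactly the rules described below:

```gitignore
# Log files
*.log

# Installed npm packages
node_modules/

# Build outputs
dist/

# Environment files containing sensitive keys
.env

# IDE settings directories
.vscode/
.idea/

# Temporary files
tmp/
```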
  3. This setup will ensure that:
    • All files ending with .log are not tracked.
    • The node_modules directory where npm packages are installed is not tracked.
    • The dist directory, which typically contains build outputs, is not tracked.
    • Environment files (.env) containing sensitive keys are ignored.
    • IDE settings directories like .vscode and .idea are ignored to avoid personal configuration conflicts between team members.
    • Any temporary files stored in tmp/ are not included in the repository.
  4. Check the .gitignore effectiveness: After setting up .gitignore, you can test its effectiveness by running git status. Files listed in .gitignore should not appear as untracked in the output of this command.
  5. Commit the .gitignore file: It’s a best practice to commit the .gitignore file to your repository. This ensures that all collaborators on the project adhere to the same rules for ignored files, providing consistency across development environments.

Using a .gitignore file in this way:

  • Keeps the repository clean and free from unnecessary files, which can reduce the size of the repository and improve the speed of Git operations.
  • Helps protect sensitive information, such as credentials in .env files, from being exposed publicly.
  • Reduces clutter in commit history, making it easier for developers to track and understand changes.

.pre-commit-config.yaml example

The .pre-commit-config.yaml file is used to configure pre-commit hooks with the pre-commit framework. Pre-commit hooks are tools that run checks on your codebase automatically before you commit your changes to the version control system. This ensures that all commits meet the required standards for code quality, styling, and other policies, preventing problematic code from entering the repository.

Say you are part of a development team working on a Python-based project. You want to ensure that every commit conforms to coding style guidelines and does not introduce any security vulnerabilities.

  1. Install the pre-commit package:
    If not already installed, you need to install the pre-commit framework. You can do this using pip:
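```shell
pip install pre-commit
```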
  2. Create .pre-commit-config.yaml:
    In the root directory of your project, create a .pre-commit-config.yaml file to define the hooks that will be run before each commit. Here’s an example configuration:
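A sketch of such a configuration; the pinned rev versions are examples and should be updated to whatever is current for your project:

```yaml
repos:
  # General-purpose hooks
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: check-yaml
      - id: end-of-file-fixer
      - id: check-added-large-files
  # Python code formatting
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black
  # Python linting
  - repo: https://github.com/PyCQA/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
  # Python security checks
  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.8
    hooks:
      - id: bandit
  # Local hooks that use tools installed on your machine
  - repo: local
    hooks:
      - id: packer-fmt
        name: packer fmt
        entry: packer fmt -check .
        language: system
        files: \.pkr\.hcl$
      - id: terraform-fmt
        name: terraform fmt
        entry: terraform fmt -check -recursive
        language: system
        files: \.tf$
      - id: yamllint
        name: yamllint
        entry: yamllint
        language: system
        types: [yaml]
```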

This configuration includes:

  • General hooks from the pre-commit-hooks repository for trimming trailing whitespace, checking YAML files, fixing the end of files, and preventing large files from being committed.
  • Black for code formatting, ensuring all Python code adheres to the same style.
  • Flake8 for linting, catching coding errors and style issues.
  • Bandit for identifying common security issues in Python code.
  • The next hooks are run as local hooks on your machine:
    • packer-fmt code formatting for Hashicorp Packer
    • terraform-fmt code formatting for Hashicorp Terraform
    • yamllint code formatting for YAML files
  3. Install the hooks:
    Run the following command to install the hooks into your Git hooks directory:
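```shell
pre-commit install
```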
  4. Commit code:
    When you attempt to commit changes, the pre-commit hooks will automatically run. If any hook flags an issue, the commit will be blocked until the issue is resolved.

Using .pre-commit-config.yaml in this way ensures that:

  • All code commits are automatically checked for style consistency, syntax errors, and potential security issues before being added to the repository.
  • The quality and security of the codebase are maintained, reducing the risk of bugs and vulnerabilities.
  • Development is streamlined as developers receive immediate feedback on their commits, helping them learn and adhere to project standards quickly.

This setup is particularly effective in collaborative environments where multiple developers contribute to the same codebase, helping maintain code quality and consistency throughout the project lifecycle.

.terraform-docs.yaml example

The terraform-docs tool is designed to automatically generate documentation from Terraform modules. It extracts information from Terraform configurations to produce human-readable documents that describe variables, outputs, providers, and resources, among other elements. This automation helps in maintaining consistent and up-to-date documentation as your infrastructure evolves.

Imagine you are managing a complex infrastructure with Terraform that consists of multiple modules. Each module is responsible for a different aspect of your infrastructure, such as networking, security, and compute resources. You want to ensure that each module has up-to-date documentation that can be easily understood by new team members and other stakeholders.


  1. Install terraform-docs:
    First, install terraform-docs. If you’re using a macOS with Homebrew, you can install it using:
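```shell
brew install terraform-docs
```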

Alternatively, for other operating systems, you can find the installation instructions in the terraform-docs GitHub repository.

  2. Prepare Your Terraform Modules:
    Make sure each Terraform module has well-defined variables, outputs, and descriptions. This helps terraform-docs extract all the necessary information.
  3. Create a Configuration File (terraform-docs.yml):
    Although you can run terraform-docs directly with CLI arguments, using a configuration file (terraform-docs.yml or .terraform-docs.yml) allows you to maintain consistency in documentation across all modules. Here’s an example configuration:
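A sketch of such a configuration, which renders a Markdown table and injects it into each module’s README.md between marker comments:

```yaml
formatter: markdown table

output:
  file: README.md
  mode: inject
  template: |-
    <!-- BEGIN_TF_DOCS -->
    {{ .Content }}
    <!-- END_TF_DOCS -->

sort:
  enabled: true
  by: name
```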
  4. Generate Documentation:
    Navigate to each module’s directory and run terraform-docs using the configuration file:
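```shell
terraform-docs -c .terraform-docs.yml .
```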

This command will generate a documentation file (typically README.md) in each module directory, containing tables and descriptions of inputs, outputs, providers, and other resources defined in the module.

  5. Automate Documentation Updates:
    Integrate terraform-docs generation into your CI/CD pipeline to ensure documentation is automatically updated whenever Terraform configurations change. This could be done as a script in your pipeline configuration:
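A hedged example of such a pipeline job; the modules/ directory layout is an assumption about your repository:

```yaml
terraform-docs:
  stage: docs
  image: quay.io/terraform-docs/terraform-docs:latest
  script:
    # Regenerate documentation for every module
    - |
      for module in modules/*/; do
        terraform-docs -c .terraform-docs.yml "$module"
      done
```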

Using terraform-docs in this way provides several benefits:

  • Consistency: All modules have a consistent format and level of detail in their documentation.
  • Efficiency: Reduces the manual effort required to keep documentation in sync with actual code changes.
  • Accuracy: Ensures that the documentation accurately reflects the current state of the Terraform code, which reduces errors and improves implementation quality.

In projects that use Terraform extensively, keeping documentation up-to-date can be challenging. terraform-docs addresses this challenge by automating the generation of module documentation, making it easier to manage large-scale or rapidly evolving infrastructure configurations.

mkdocs.yml example

The mkdocs.yml file is the configuration file for MkDocs, a static site generator that’s geared towards creating project documentation. MkDocs uses Markdown files as the source, allowing you to quickly generate a clean and responsive documentation website. The mkdocs.yml file defines how the site is built, including its structure, themes, plugins, and other settings.

You are developing a software project and want to provide comprehensive user documentation that is easy to navigate and looks professional. The project is growing, and the documentation needs to be easily updatable and maintainable by different team members.

  1. Install MkDocs:
    You first need to install MkDocs. It’s a Python package, so you can install it using pip:
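```shell
pip install mkdocs
```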
  2. Set up mkdocs.yml:
    Create an mkdocs.yml file at the root of your documentation directory. This file will contain all the necessary configurations for your documentation site. Here’s an example:
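A sketch of a possible mkdocs.yml; the site name and page structure are placeholders, and the material theme assumes the mkdocs-material package is also installed:

```yaml
site_name: My Project Documentation
site_description: User and developer documentation for My Project

theme:
  name: material

nav:
  - Home: index.md
  - Getting Started: getting-started.md
  - User Guide:
      - Features: user-guide/features.md
      - Workflows: user-guide/workflows.md
  - FAQ: faq.md

plugins:
  - search
```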
  3. Write Documentation:
    Create Markdown files according to the structure defined in mkdocs.yml under the nav section. Each Markdown file represents a page in the documentation website.
  4. Build and Preview the Documentation:
    Use MkDocs commands to build and preview the site locally before publishing:
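```shell
# Build the static site into the site/ directory
mkdocs build

# Serve locally at http://127.0.0.1:8000 with live reload
mkdocs serve
```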

The serve command starts a local server so you can view the site in a web browser; it automatically rebuilds the site whenever a Markdown file changes, allowing you to preview changes in real time.

  5. Deploy the Documentation:
    Once satisfied with the local preview, deploy the site to a server:
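One hedged option is to build the site and copy it to a web server; the hostname and path below are placeholders:

```shell
mkdocs build
rsync -av site/ user@docs.example.com:/var/www/docs/
```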

Or you can deploy to GitHub Pages:
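```shell
mkdocs gh-deploy
```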

Benefits of this approach:

  • Consistency: Provides a consistent look and navigation structure that enhances the user experience.
  • Collaboration: Allows multiple contributors to work on the documentation simultaneously.
  • Scalability: Easily scalable as adding new pages or sections is as simple as updating the mkdocs.yml file and adding new Markdown files.
  • Version Control: Integrates well with version control systems, making it easier to track changes and roll back if necessary.

Using MkDocs with a properly configured mkdocs.yml file offers a robust solution for managing and maintaining project documentation, making it an ideal choice for projects that require clear, structured, and professional-looking documentation.

docker-compose.yml example

In a scenario where you’re managing a development environment with Docker Compose that includes MkDocs for documentation, Terraform for infrastructure as code, and Packer for creating machine and container images, Docker Compose can be used to define and run multi-container Docker applications, ensuring that each tool is contained within its own environment. This setup allows for consistent development environments across team members and simplifies dependency management.

Here’s an example docker-compose.yml file that configures services for MkDocs, Terraform, and Packer:
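A sketch of such a file; the image tags and the packer run script are assumptions you would adapt to your project:

```yaml
services:
  mkdocs:
    # Image bundling MkDocs with the Material theme; serves on port 8000
    image: squidfunk/mkdocs-material
    ports:
      - "8000:8000"
    volumes:
      # Live-edit documentation files from the local docs directory
      - ./docs:/docs

  terraform:
    image: hashicorp/terraform:latest
    working_dir: /work
    volumes:
      - ./terraform:/work
    # Runs bare terraform by default; append subcommands when invoking
    entrypoint: ["terraform"]

  packer:
    # Could also be a locally built image instead of the official one
    image: hashicorp/packer:latest
    working_dir: /packer
    volumes:
      - ./packer:/packer
    # Hypothetical wrapper script that invokes packer with your options
    entrypoint: ["/packer/run-packer.sh"]
```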

Docker-compose file breakdown

  1. MkDocs Service:
  • Maps the local docs directory to the /docs directory inside the container, allowing live editing of documentation files.
  2. Terraform Service:
  • Sets the working directory to /work.
  • By default it only runs terraform, so you can append subcommands, for example docker-compose run terraform plan or docker-compose run terraform apply.
  3. Packer Service:
  • Can use a local or the official hashicorp/packer Docker image.
  • Maps a local packer directory to /packer inside the container.
  • Sets the working directory to /packer.
  • Executes a script which runs packer with different options; it could be set to run packer build, for example.


Benefits of this setup:

  • Isolation: Each tool runs in its own container, ensuring dependencies and environments don’t conflict.
  • Reproducibility: The environment is defined in code, making it easy to replicate across different machines.
  • Ease of Use: Running a single command sets up the entire environment needed for development.

.gitlab (folder) example

The .gitlab folder in a project repository typically contains configuration files specifically for use with GitLab, particularly related to GitLab CI/CD (Continuous Integration/Continuous Deployment). This folder can include scripts, CI pipeline definitions, and other resources that are crucial for automating build, test, and deployment workflows in a GitLab environment.

Say you are managing a software project that involves multiple programming languages and technologies, including a backend written in Python, a frontend in JavaScript, and perhaps some microservices in Go. You want to set up automated pipelines that handle building, testing, and deploying these components across different environments (development, staging, production).


  1. Structure of the .gitlab Folder:
    You might organize the .gitlab folder to include subfolders and scripts tailored to specific parts of your CI/CD process:
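One possible layout; all the file and folder names here are illustrative, not a GitLab requirement:

```text
.gitlab/
├── ci/
│   ├── build.yml
│   ├── test.yml
│   └── deploy.yml
├── scripts/
│   ├── test.sh
│   ├── lint.sh
│   └── deploy.sh
└── merge_request_templates/
    └── Default.md
```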
  2. CI Pipeline Configuration (.gitlab-ci.yml):
    Create a .gitlab-ci.yml file at the root of your repository (not in the .gitlab folder) that uses the scripts and templates defined within the .gitlab folder. This file defines the CI/CD pipeline stages and includes jobs for building, testing, and deploying:
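A sketch of such a .gitlab-ci.yml, assuming the hypothetical .gitlab folder layout above:

```yaml
stages:
  - build
  - test
  - deploy

# Pull in job definitions kept inside the .gitlab folder
include:
  - local: .gitlab/ci/build.yml
  - local: .gitlab/ci/test.yml
  - local: .gitlab/ci/deploy.yml

lint:
  stage: test
  script:
    - .gitlab/scripts/lint.sh
```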
  3. Script Execution:
    Ensure that each script (for example, the test, lint, and deploy scripts) is executable and properly written to perform its intended tasks:
  • Testing scripts might set up environments, install dependencies, and run unit tests.
  • Linting scripts might execute code quality checks.
  • Deployment scripts could handle the deployment of built artifacts to various environments, using Docker, Kubernetes, or other deployment technologies.
  4. Running the Pipeline:
    Commit and push changes to your GitLab repository. GitLab CI/CD will automatically pick up the .gitlab-ci.yml configuration and run the defined jobs according to the triggers set (such as on push, merge requests, or manual triggers).


Benefits of this structure:

  • Organization: Keeping CI/CD scripts and configurations in a .gitlab folder helps in maintaining a clean and organized repository structure.
  • Modularity: By separating scripts and CI templates into distinct files and folders, you can easily reuse and manage CI components across different projects or parts of the same project.
  • Scalability: This structure supports scaling up the CI/CD process as the project grows in complexity or as new technologies are integrated.


CODEOWNERS example

The CODEOWNERS file is a tool used in version control systems like GitLab and GitHub to define individuals or teams that are responsible for specific parts of a repository. When used within a GitLab context, the CODEOWNERS file can help automate the assignment of merge request reviewers and ensure that the right people are aware of changes in the codebase they own. This feature can enhance the security and quality of the code by involving the most knowledgeable contributors in the review process.

Suppose you are working on a large software development project that involves multiple teams, each specializing in different areas of the project. To ensure that code reviews are conducted by the most knowledgeable team members and to protect critical parts of the codebase, you decide to implement a CODEOWNERS file.

  1. Create the CODEOWNERS File:
    Within the .gitlab directory in your GitLab repository, create a CODEOWNERS file. This file will list the paths within the repository and the corresponding owners (either GitLab users or groups).
  2. Define Ownership Rules:
    Specify which files and directories each team is responsible for. Here’s an example of what the CODEOWNERS file might look like:
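A sketch of a CODEOWNERS file; the paths and group names are placeholders for your own teams:

```text
# Backend code is owned by the backend team
/backend/            @backend-team

# Frontend code is owned by the frontend team
/frontend/           @frontend-team

# Go microservices
/services/           @platform-team

# CI/CD configuration requires DevOps approval
/.gitlab/            @devops-team
.gitlab-ci.yml       @devops-team
```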
  3. Integrate with Merge Requests:
    With the CODEOWNERS file in place, GitLab can use it to automatically suggest or require reviews from the listed users or groups when a merge request affects files they own. This integration can be customized in your project’s settings.
  4. Educate the Team:
    Make sure that everyone in the project understands the purpose of the CODEOWNERS file and how it affects the merge request process. It’s crucial for team members to be aware that they might be automatically assigned to review changes to certain parts of the codebase.
  5. Maintain the CODEOWNERS File:
    As the project evolves, regularly update the CODEOWNERS file to reflect changes in team responsibilities or project structure. This maintenance is crucial to keep the review process relevant and effective.


Benefits of using a CODEOWNERS file:

  • Automated Reviewer Assignment: Automatically assign merge requests to the right experts, improving the quality of code reviews.
  • Enhanced Security: Protect critical parts of your code by ensuring that only authorized and knowledgeable team members can approve changes.
  • Clear Responsibilities: Clarify who is responsible for maintaining specific parts of the codebase, which can be especially helpful in large projects or teams.

Using a CODEOWNERS file in a GitLab project helps streamline the code review process by automating the assignment of merge request reviewers based on expertise and responsibility. This ensures that all changes are scrutinized by the most qualified individuals, enhancing both the security and the quality of the software development project.

docs folder example

The docs folder in a code project is traditionally used to store all documentation related to the project. This can include user manuals, API documentation, architectural overviews, setup guides, and more. Having a dedicated docs folder is essential for organizing all written resources in a way that is accessible and maintainable, especially as the project grows.

  1. Structure of the docs Folder:
    Organize the docs folder with a logical structure that caters to different types of documentation needs:
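One possible layout matching the categories described below; all file names are illustrative:

```text
docs/
├── index.md
├── getting-started/
│   ├── installation.md
│   └── quickstart.md
├── user-guide/
│   ├── features.md
│   └── workflows.md
├── developer-guide/
│   ├── contributing.md
│   └── coding-standards.md
├── architecture/
│   └── overview.md
├── faq.md
└── assets/
    └── img/
```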
  2. Create and Maintain Documentation:
  • Getting Started: Documents that help new users install the project and begin using it with minimal setup.
  • User Guide: Detailed descriptions of features, how to use them, and specific workflows.
  • Developer Guide: Information for contributors, including contribution guidelines and coding standards, ensuring that contributions are consistent and in line with project standards.
  • Architecture: An overview of the system architecture, components, and interaction between services, which is crucial for both users and contributors.
  • FAQs: A collection of frequently asked questions to help users troubleshoot common problems without needing direct support.
  3. Integrate Documentation Tools:
    Use tools like MkDocs to generate static site documentation from your markdown files or source code comments, making it easy to host and share:
  • Configure a tool like MkDocs with a mkdocs.yml file at the root that points to the docs folder.
  • Build the documentation site and deploy it to GitHub Pages, GitLab Pages, or another hosting service.
  4. Link Documentation in the Project:
    Ensure there is a clear link to the documentation in the main README.md file at the root of the project. This file should have a section that directs users and contributors to the docs folder for more detailed information.
  5. Update and Review Documentation Regularly:
    Make documentation updates a part of your project’s workflow. Review documentation as part of the code review process to ensure that any changes in the code are reflected in the documentation.

/docs/assets/img folder example

The /docs/assets/img folder in a documentation directory is typically used to store image files that are used within the documentation itself. This organizational strategy keeps image assets neatly organized and easily manageable, particularly in large documentation projects where images play a critical role in enhancing readability and understanding.

Organize the /docs/assets/img Folder: Structure the img folder to categorize different types of images for easy management and reference:
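A possible structure; the category folders are illustrative:

```text
docs/assets/img/
├── screenshots/
├── diagrams/
└── logos/
```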

Reference Images in Documentation: In your Markdown files within the docs folder, reference these images using relative paths. For example, if you are describing the login process in your user guide, you might include a screenshot like so:
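The Markdown syntax would look like this; the filename is a placeholder:

```markdown
![Login screen](assets/img/screenshots/login-screen.png)
```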

README.md example

The README.md file is an essential component of nearly every software project, acting as the first point of contact for anyone who encounters the repository. It typically provides a comprehensive overview of the project, including what it does, how to set it up, how to use it, and where to find more detailed documentation. The README can significantly influence the initial impression and usability of a project, making it crucial for both open-source and private projects.

  1. Start with a clear, concise description of what the application does and its purpose. This section should capture the essence of the project and its unique value proposition.
  2. List the key features of the application to give readers an idea of what the application can do.
  3. Provide step-by-step instructions on how to set up the project locally. This might include prerequisites, required software, and any environment setup needed.

4. Explain how to use the application or how to run commands for different tasks.

5. Encourage contributions by explaining how others can contribute to the project. Link to the CONTRIBUTING.md file if you have one.

6. Specify the license under which the project is released, so users know how they can legally use the project.

7. Provide a way for users to contact the project team. This could be an email address, a link to a project website, or social media profiles.
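The sections above could be laid out in a README.md skeleton like this; every name and heading is a placeholder to adapt:

```markdown
# Project Name

A one-paragraph description of what the application does and why it exists.

## Features

- Feature one
- Feature two

## Installation

Step-by-step setup instructions, including prerequisites.

## Usage

How to run the application and common commands.

## Contributing

How to contribute; link to CONTRIBUTING.md if present.

## License

The licence the project is released under.

## Contact

How to reach the project team.
```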


A well-written README.md brings several benefits:

  • Clarity and Accessibility: A comprehensive README.md ensures that anyone who comes across the project quickly understands its purpose, setup, and use.
  • Encourages Collaboration: Detailed contribution guidelines help foster a community around the project.
  • Reduces Setup Time: Clear installation instructions minimize setup errors and reduce onboarding time for new contributors and users.

There are more files; maybe if this post is popular I will add more. But for now, that’s it!