Serviced Apartments – A Comfort Oasis for Long Corporate Stays

With India becoming a major business hub, corporate travel is in its heyday. It has become an essential driver of business expansion and establishment, and with its surge the business accommodation industry has also evolved to keep pace with travellers' needs. When corporate stays are extended, the most preferred option is the serviced apartment.

Reports say that 88% of companies use serviced apartments for extended business stays. You may ask why one should choose serviced apartments over hotels. Hotels are luxurious, with all the comforting amenities, and they are the perfect abode for a stay of a few days. But when it comes to tariffs, hotels become pricey for long stays compared to serviced apartments, and the hefty charges can quickly break the bank. Along with the tariff, there is also the food you consume every day. If your business stay spans months, the accommodation should not be just a place to hit the bed, but a comfortable space to live and sustain yourself in another country. This is where serviced apartments come to the rescue. They have become the comfort oasis for long-term corporate travellers, offering home comfort as well as corporate sophistication even in space-starved cities.

What Are Serviced Apartments?

Serviced apartments are extended-stay accommodations meant to provide home-like facilities. They are like an extended version of a hotel room, with more rooms, a kitchen and Wi-Fi, all adding up to the ultimate ‘home away from home’ experience. Most serviced apartments are occupied by corporate travellers, but they are also open to the general public, whether on a family vacation or travelling for any other purpose.

Benefits:

  • These apartments offer the same conveniences as a hotel: room service, fully equipped kitchens, Wi-Fi, in-apartment washers and dryers, and separate living, sleeping and working areas, often with access to restaurants and gyms too.
  • They are more spacious and comfortable than hotels, and offer greater privacy.
  • You can cook your own meals whenever you want, with everything you need already provided.
  • Serviced apartments are commonly located in the heart of the city, so it is a short commute to your intended destination.
  • They are ideal for a cost-effective long stay: a washing machine is provided, so there is no need to pay expensive laundry charges, and if you do not want to cook, food can be ordered from nearby takeaway joints, saving exorbitant room-service charges.
  • They give you the comfort of keeping your usual routines even when you are away from home.

For all the benefits they bring, there are always some drawbacks attached as well.

The main challenge with serviced apartments is finding one that fits your requirements and budget in a particular location, and that demands a lot of searching. Another is the relevance and freshness of availability information: even when you can find details about an apartment, the availability of specific facilities such as room service, laundry and food cannot be known accurately.

These challenges can be solved with the help of travel and accommodation experts. They have granular knowledge of which facilities are available where, and they can quickly shortlist the serviced apartments that meet your requirements and budget.

Hotel n Apartment is one such connoisseur of the travel and accommodation industry. By dint of our network intelligence, we can list serviced apartments from moderate to premium, each the best at its own level. We can list luxurious serviced apartments at reasonable rates, and where serviced apartments are already cheap, those suggested by Hotel n Apartment are cheaper still, without compromising on the quality of the stay.

With an inventory of 2,800+ accommodations across 90+ countries, including the USA, UAE, Australia and India, you can get a curated list of serviced apartments in various cities of each country. Hotel n Apartment has made corporate customers happy by making their stays sophisticated in every possible way. Let us have the pleasure of doing it for you too. Contact us for your next long corporate stay!

Post Discussion

10,421 thoughts on “Serviced Apartments – A Comfort Oasis for Long Corporate Stays”

  1. Baltimorejala (3 years ago)

    Continuous Delivery with TFS, VSTS, and Azure DevOps

    In this article we tackle the common challenges encountered when trying to do continuous integration with Azure DevOps, aka VSTS, aka TFS. Some critical topics covered here are:

    How to work with Azure DevOps environment variables
    How to create a build pipeline

    By the way, if you aren’t aware, TFS, VSTS and Azure DevOps are all technically the same solution. Over several years, Microsoft did a lot of rebranding, and the name went from TFS to VSTS to Azure DevOps relatively quickly, causing lots of confusion between these products. The latest version is called Azure DevOps, and I hope it stays with this name for the foreseeable future.
    Automation pipeline in Azure DevOps with .NET Framework
    In this section, we will cover automated testing pipelines in Azure Pipelines.
    Working Azure DevOps pipeline with automated tests in .yml
    Staged test execution in Azure DevOps
    What if you want to run different suites in your Azure pipeline so that you have a gated release? For example, you might run unit tests first, followed by integration tests, followed by acceptance tests. In that case, you need the ability to filter.
    By default, Azure DevOps will attempt to run all of the tests in your solution, regardless of whether you are using NUnit or MSTest or something weird like xUnit (although I never tried that and wouldn’t).
    When you add a VS Test task to your pipeline, this default configuration will run all your tests
    YAML file to run tests
    Important points about YAML file
    I couldn’t get searchFolder to filter out directories
    However, using testAssemblyVer2 worked really well
    Filtering tests using Azure DevOps
    In order to have a staged test execution in our continuous integration pipeline, we need to be able to filter our tests. I have hundreds of tests in this repo, but I would only like to run the tests that have [Category("BestPractices")] from NUnit.
    Furthermore, I would like to exclude all of the performance tests, even though they are also categorized as “BestPractices”. As a result, the configuration of our Azure DevOps task looks like this.

    And here is what the same exact thing looks like in a .yml
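    The screenshots from the original post are not reproduced here. As a rough sketch (the test assembly pattern and category names follow the examples above; exact values are assumptions), a VSTest task in YAML with this kind of filtering might look like:

        steps:
        - task: VSTest@2
          displayName: 'Run BestPractices tests'
          inputs:
            testSelector: 'testAssemblies'
            # testAssemblyVer2 worked well for locating the test .dlls
            testAssemblyVer2: |
              **\$(BuildConfiguration)\**\Web.Tests.dll
              !**\obj\**
            # run the BestPractices category but exclude the Performance tests
            testFiltercriteria: 'TestCategory=BestPractices&TestCategory!=Performance'
            platform: '$(BuildPlatform)'
            configuration: '$(BuildConfiguration)'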
    UI Automation in Sauce Labs
    UI Automation CI pipeline using YAML with Sauce Labs
    How to set Sauce Labs environment variables in Azure DevOps with .NET Core?
    These are instructions if you are working with .NET Core
    1. Create environment variables in Azure DevOps and assign them secret values
    Here’s an example

    My recommendation is that you name the variables something similar to what I have above. Do not name your ADO variables the same as your environment variables, as that will cause issues when you are trying to read them; so don’t name your ADO variable SAUCE_USER_NAME, for example.
    2. Pass values from environment variables to variables in the source code
    For this to work, you need to be reading values into environment variables exactly like this:
    Make sure that you do NOT set the EnvironmentVariableTarget.User as your 2nd parameter as Azure DevOps will not be able to read that variable.
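    The snippet itself is not included in this copy; a minimal sketch of such a reader in C# (the class and property names are assumptions) is:

        using System;

        public static class SauceConfig
        {
            // No EnvironmentVariableTarget.User as a second argument: Azure DevOps
            // populates the default (process) scope for the running job.
            public static string UserName => Environment.GetEnvironmentVariable("SAUCE_USERNAME");
            public static string AccessKey => Environment.GetEnvironmentVariable("SAUCE_ACCESS_KEY");
        }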
    3. Configure .yml to read values from Azure pipelines and set them into system environment variables
    Your YAML needs this piece of code to be able to set environment variables of the Azure DevOps box
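    As a sketch (the pipeline variable names sauceUserName and sauceAccessKey are assumptions, following the naming advice above), the relevant piece of YAML maps pipeline variables onto process environment variables for a step:

        steps:
        - script: dotnet test
          displayName: 'Run tests with Sauce Labs credentials'
          env:
            # secret pipeline variables must be mapped explicitly to be visible to the step
            SAUCE_USERNAME: $(sauceUserName)
            SAUCE_ACCESS_KEY: $(sauceAccessKey)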
    Extra resources
    An excellent blog about environment variables in ADO
    Set Sauce Labs environment variables in Azure DevOps with .NET Framework?
    Using task.setvariable
    One way to set environment variables in Azure DevOps is the ##vso[task.setvariable] logging command.
    In the .yml, specify code that looks like this
    The last 2 lines are the important ones: they state that we should set the environment variable SAUCE_USERNAME to the value of $($env:SAUCE_USER). SAUCE_USER is a variable that was defined in Azure DevOps.
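    The original snippet is not reproduced; a hedged reconstruction based on the description above (SAUCE_KEY is an assumed counterpart of SAUCE_USER) would be:

        steps:
        - powershell: |
            Write-Host "Mapping Azure DevOps variables onto Sauce environment variables"
            Write-Host "##vso[task.setvariable variable=SAUCE_USERNAME]$($env:SAUCE_USER)"
            Write-Host "##vso[task.setvariable variable=SAUCE_ACCESS_KEY]$($env:SAUCE_KEY)"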
    Using a Powershell script
    This is a more complicated approach than above.

    1. First, you need to create some environment variables in your Azure DevOps UI that you want to use for values. This is an example of a variable that I would like to set on the test agent for my automation scripts, for example sauce.userName. I will take the value of this variable (sauce.userName) and have a Powershell script set it in the system environment variables of the test agent when my automation is running. That way, the value of this variable isn’t exposed to the public.

    2. Next, you will want to create a Powershell script that you attach to your solution. Here’s my solution layout.
    Powershell script placed at the root of the solution
    Don’t forget to copy your Powershell script to the output directory on build in your .csproj.
    Here is what you want in your Powershell script.
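    The script is not included in this copy; a minimal sketch (the file and parameter names are assumptions) is:

        # set-sauce-variables.ps1 (hypothetical name)
        param(
            [string]$sauceUserName,
            [string]$sauceAccessKey
        )

        # Persist the values on the test agent so the automation can read them later.
        [Environment]::SetEnvironmentVariable("SAUCE_USERNAME", $sauceUserName, "Machine")
        [Environment]::SetEnvironmentVariable("SAUCE_ACCESS_KEY", $sauceAccessKey, "Machine")

        # Echo to the build log so we can verify the right values were read.
        Write-Host "SAUCE_USERNAME was set for user $sauceUserName"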
    3. Create a Powershell task in your Pipeline
    The Powershell script will read the values that are passed in from the Azure DevOps and then run those values through SetEnvironmentVariable() in the .ps1

    When this task runs, I’m having it output the values to the Azure DevOps logs so that I can make sure the right values are being read. Here’s an example
    ADO variables being read in the Posh script
    How to read environment variables in Azure DevOps?
    If you are reading an environment variable in your code like this
    Make sure that you do NOT set the EnvironmentVariableTarget.User as your 2nd parameter as Azure DevOps will not be able to read that variable.
    How to create environment variables to share across pipelines
    If you would like to reuse environment variables across multiple pipelines, then you need to use variable groups. It’s important to know that if you use “group”, you then need to list your other variables as name/value pairs.
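    For example (the group name here is an assumption), a variables block that mixes a shared group with an ordinary variable looks like this:

        variables:
        - group: sauce-labs-credentials   # variable group shared across pipelines
        - name: buildConfiguration        # ordinary variables now need name/value pairs
          value: 'Release'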
    How to package Nuget using Azure DevOps?
    Phenomenal resource that will walk you through the process.
    This is a resource that helps you to understand how to work with Nuget in .NET Standard.
    Set up a Service connection to Nuget in your project settings

    You do need to have an MSDN account and then you can create a Nuget.org account.
    How to publish the Nuget package

    Download Nuget.exe
    Add a nuget.config file
    Run command nuget restore
    Run command .\nuget.exe push -Source “Simple.Sauce” -ApiKey az “C:\Source\SauceLabs\simple_sauce\dotnet\Simple.Sauce\bin\Debug\Simple.Sauce.0.1.1-debug.nupkg”

    Why DevOps?

    value of dev-ops
    Adding continuous integration

    Select Builds > Edit > Triggers. Under Continuous integration, select the name of your repository.
    Toggle on the checkbox labeled Enable continuous integration.
    You can select CI for both merges and Pull Requests – https://prnt.sc/kb0g6n

    How to create scheduled builds

    Picking up from the section above, click the Add button under the Scheduled section
    Update the time and the day of when you want to run your builds
    Save & queue
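    The steps above use the classic editor; if the pipeline is defined in YAML instead, the rough equivalent (the time, days and branch name are assumptions) is a schedules trigger:

        schedules:
        - cron: '0 3 * * 1-5'        # 03:00 UTC, Monday to Friday
          displayName: 'Nightly build'
          branches:
            include:
            - master
          always: true               # run even when there are no new changes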

    How to set up your test .dll paths
    The hardest part for me was configuring the paths for my automation .DLL files. It’s not intuitive to know what the working directory of your code base is.
    Here are some examples of what’s worked for me:
    Executable Paths
    Local path: “C:\Source\Github\dot-net-sauce\SauceExamples\Web.Tests\bin\Debug\Web.Tests.dll”
    Azure DevOps path: ‘**\$(BuildConfiguration)\**\Web.Tests.dll’
    Local path: “C:\Source\Github\dot-net-sauce\SauceExamples\SeleniumNunit\bin\Debug\SeleniumNunit.dll”
    Azure DevOps path: ‘**\$(BuildConfiguration)\**\SeleniumNunit.dll’
    How to pass parameters to test code from a build or release pipeline?
    A: Use a runsettings file to pass values as parameters to your test code. For example, in a release that contains several stages, you can pass the appropriate app URL to each of the test tasks in each one. The runsettings file and matching parameters must be specified in the Visual Studio Test task.
    How to add a status badge to your Github repo
    The instructions on this are really good from Microsoft and you can follow them here in the Get the status badge section
    Finally, you want to have a Powershell step in your YAML that executes this Powershell script and passes in the values from the variables that you set in the Azure DevOps UI.
    Below is what my YAML step looks like; I’m basically setting the SAUCE_USERNAME and SAUCE_ACCESSKEY variables.
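    The original YAML is not reproduced; a sketch of such a step (the script path and variable names are assumptions, matching the PowerShell sketch earlier) is:

        steps:
        - task: PowerShell@2
          displayName: 'Set Sauce Labs environment variables'
          inputs:
            filePath: '$(System.DefaultWorkingDirectory)/set-sauce-variables.ps1'
            arguments: '-sauceUserName "$(sauce.userName)" -sauceAccessKey "$(sauce.accessKey)"'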

  2. Elginjala (3 years ago)

    Continuous Integration and Continuous Deployment using TFS – Part 1
    Introduction
    Continuous integration (CI) and continuous deployment (CD) help reliably deliver quality apps to customers at a faster rate. Everything from code through build, test, and deployment is defined in efficient, fully managed pipelines that automate and control the entire process.
    With Continuous Deployment, every change that is made is automatically deployed to the production/dev/staging environment. This approach works well in enterprise environments where we plan to use the user as the actual tester, and it can make releases quicker.
    This blog post provides an idea about setting up the CI and CD pipeline for any project in Team Foundation Server and run the automated builds and deployment from that.
    Continuous integration (CI) is a development practice that requires developers to integrate code into a shared repository several times a day. Each check-in is verified by an automated build, allowing teams to detect problems early.

    Continuous Delivery is the extension of CI: an approach in which the team ensures that every change to the system is releasable, and we can release any version at the push of a button.

    Continuous Deployment (CD) is similar to Continuous Delivery. Here, once the developer commits the code it will go to production without any manual/human intervention.

    CI/CD Pipeline – It is the backbone and one of the most important parts of DevOps. Here we create a smooth and seamless pipeline/workflow, right from coding through to deployment to production.

    Team Foundation Server (aka TFS) is a Microsoft product which serves as an SCM tool like Git. It also offers reporting, project management, automated builds, testing and release management capabilities. It covers the entire lifecycle and enables DevOps capabilities. TFS can be used with numerous IDEs, including Visual Studio and Eclipse, on all platforms.
    It provides features for implementing both CI and CD, including Build Management, which covers build process management and automated build triggering. It supports a good number of automated build triggers, such as a scheduled build, a Continuous Integration trigger, etc.
    Problem Statement
    When development is done and the testing team wants to check the functionality, the build is released to the different test environments, and for most projects this activity is handled manually. Manual build deployments take a lot of time and require user interaction. They also bring challenges such as handling the build version number, higher maintenance cost, and the need for more team engagement and organization.
    Solution
    To overcome all the challenges and dependencies of the manual build deployment process, automated build deployment is the best solution. Build automation is the process of automating the creation of a software build and the associated processes, including compiling source code into binary code, packaging the binaries, and running automated tests.
    Now let us see how to set up the CI-CD pipeline using TFS.
    Prerequisite for set up CI & CD using TFS

    Admin Permissions for the TFS
    Set up build agent for running the builds from the TFS build definition.

    Set up Admin Permissions for TFS
    When configuring a user in the Team Foundation Server administrator role, you must set permissions in Team Foundation Server groups.
    1. From the team page on TFS, open the team administration page.
    2. Add an administrator.

    Set up the Build Agent
    In TFS, navigate to the Agent Queues section and select the option to download the agent.

    Get the agent that matches your system prerequisites.

    Create and configure the Agent as specified in the above screenshot.
    After downloading and configuring the agent, its contents will be displayed in a folder structure like the one below:

    Run the agent in admin mode. For builds and releases to run, the agent should always be running.
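    As a sketch, on a Windows build machine the downloaded agent is typically configured and started from an elevated command prompt roughly like this (the extraction folder is an assumption):

        cd C:\agent
        rem Register the agent against the TFS collection and an agent queue
        .\config.cmd
        rem Keep the agent running so queued builds and releases are picked up
        .\run.cmd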

    Set up the Continuous Integration using TFS
    Create a build definition
    Login to Team Foundation Server as Admin and Create a Build Definition.
    Go to the Build & Release tab and navigate to the Builds section.
    Click on +New to create a new build definition.

    Choose template for the new build definition
    In TFS there are multiple options for creating a build definition, such as Maven, Jenkins, Xamarin.Android, etc. Select the template based on your application.
    Select the Visual Studio template to build and run tests from Visual Studio. For the Visual Studio template, a build agent is required.

    Select the repository settings for the build definition
    Select the repository source and the default agent queue for running the build definition.
    For the build definition, multiple tabs are present: Build, Variables, Triggers, Repository and History.
    These options help the user to maintain the build definition.

    Multiple build steps can be added to the build template selected.
    Click on Build steps to open the task catalogue, and from it select Build, Utility, Test, Package and Deploy options based on the application requirements.

    Configure the variables for the build definition.
    The Variables section for the build contains the list of predefined variables, and more variables can be added in this section.

    Variables are accessed for the build as $(variableName)
    See the screenshot below for reference:

    Create the versioning for the build
    For tracking the build, versioning is important and TFS has the option for creating a Build number format.
    For the Build Definition, navigate to the General section and provide the Build Number Format based on the project need.

    With this definition the build numbers become 1.0.0.30, 1.0.0.31 and so on.
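    As a sketch (the fixed major.minor.patch prefix is an assumption), a Build Number Format that produces such numbers combines a constant prefix with the revision token:

        1.0.0$(Rev:.r)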

    Agent Queue
    The Agent Queues section manages the list of configured agents and the capabilities requested for the build agent.
    Build Summary
    Click on the build definition you created to view its complete summary.
    The summary contains details of recently completed builds, edit options for the build definition, the option to queue a new build, and the definition’s history.
    Set up the test assemblies for running the tests
    Add the test assembly for test execution with the builds.

    Build definition execution
    Whenever a code check-in is done, the build starts and all the steps are executed in specified order.

    Navigate to the Tests tab in TFS and go to the Runs section; here the complete automated test execution results can be viewed. All the details of the build platform and environment are available, and the .trx file for the current build execution can be downloaded from this section. The .trx file contains the complete execution details of the tests triggered along with the build, including the pass/fail status and failure reasons.
    In the next post we will discuss how to set up Continuous Deployment using TFS.

  3. Miramarjala (3 years ago)

    Azure DevOps
    Plan smarter, collaborate better, and ship faster with a set of modern dev services.
    Already have an account?

    Azure Boards
    Deliver value to your users faster using proven agile tools to plan, track, and discuss work across your teams.

    Azure Pipelines
    Build, test, and deploy with CI/CD that works with any language, platform, and cloud. Connect to GitHub or any other Git provider and deploy continuously.

    Azure Repos
    Get unlimited, cloud-hosted private Git repos and collaborate to build better code with pull requests and advanced file management.

    Azure Test Plans
    Test and ship with confidence using manual and exploratory testing tools.

    Azure Artifacts
    Create, host, and share packages with your team, and add artifacts to your CI/CD pipelines with a single click.
    Extensions Marketplace
    Access extensions from Slack to SonarCloud to 1,000 other apps and services—built by the community.
    Use all the DevOps services or choose just what you need to complement your existing workflows

    Azure Boards
    Agile planning tools
    Track work with configurable Kanban boards, interactive backlogs, and powerful planning tools. Unparalleled traceability and reporting make Boards the perfect home for all your ideas—big and small.

    Azure Pipelines
    CI/CD for any platform
    Build, test, and deploy in any language, to any cloud—or on-premises. Run in parallel on Linux, macOS, and Windows, and deploy containers to individual hosts or Kubernetes.

    Azure Repos
    Unlimited free private repos
    Get flexible, powerful Git hosting with effective code reviews and unlimited free repositories for all your ideas—from a one-person project to the world’s largest repository.

    Azure Test Plans
    Manual and exploratory testing
    Test often and release with confidence. Improve your overall code quality with manual and exploratory testing tools for your apps.

    Azure Artifacts
    Universal package repository
    Share Maven, npm, NuGet, and Python packages from public and private sources with your entire team. Integrate package sharing into your CI/CD pipelines in a way that’s simple and scalable.

    Extensions Marketplace
    Access 1,000+ extensions or create your own.
    See how customers are using Azure DevOps

    Chevron accelerates its move to the cloud, sharpens competitive edge with SAFe® built on Azure DevOps.

    Pioneering insurance model automatically pays travelers for delayed flights.

    Digital transformation in DevOps is a “game-changer”.

    Axonize uses Azure to build and support a flexible, easy-to-deploy IoT platform.

    Cargill builds a more fertile and secure platform for innovation in the public cloud.

    Learn how to scale DevOps practices throughout your organization
    Read Enterprise DevOps Report 2020-2021 to learn how top-performing organizations have implemented DevOps across their businesses.

    Optimize remote developer team productivity with these 7 tips
    Find out how to empower your distributed development team with the right tools, skills, and culture for remote development.
    Azure DevOps
    Choose Azure DevOps for enterprise-grade reliability, including a 99.9 percent SLA and 24×7 support. Get new features every three weeks.
    On-Premises
    Manage your own secure, on-premises environment with Azure DevOps Server. Get source code management, automated builds, requirements management, reporting, and more.
    See how teams across Microsoft adopted a DevOps culture
    Get started with Azure DevOps
    Easily set up automated pipelines to build, test, and deploy your code to any platform.

  4. Lowelljala (3 years ago)

    GIT Cherry-pick

    Introduction to GIT Cherry-pick
    In this article, we’re going to learn about GIT cherry-pick in detail. There are many programmers working on the same software from different corners of the world. So how do they manage the code? How do they let others understand what changes they have made? How do they commit code and maintain different versions? How do they merge code?
    To solve these problems, GIT came into the development world. GIT is an outstanding Source Code Management [SCM] and Distributed Version Control System. GIT was created by Linus Torvalds, the person who developed the Linux kernel. It is an open-source tool where every programmer can contribute to building a piece of software from anywhere in the world.
    GIT has many features. A repository may have multiple branches: a developer can write code on their own branch in the local system and merge it with the master branch or other branches of the remote GIT repository.
    What is GIT Cherry-pick?
    Imagine, project work is going on to write a script on the history and evolution of cell phones. So, there are many people working on the same project and all are working separately. However, in the end, everyone’s script will be compiled together.
    Now, member A is writing about Apple phones and suddenly realizes it could be better. So he informs the other team members who are working on the same project. Another member, X, tells him that he is writing a script on Android phones and asks member A to have a look.
    Member A then looks into his teammate’s script and finds that some portions are the same, with some changes that are really good. So he cherry-picks those changes and pastes them into his own script. This is essentially what cherry-picking means in GIT, in the context of the software industry.

    git cherry-pick is a powerful git command, and cherry-picking is the process of picking a commit from one branch and applying it to another. In simple words, there can be multiple branches where developers commit their code. Suppose a developer was supposed to commit his code to branch A but committed it to branch B by mistake; cherry-pick can then move that commit back to the correct branch.
    Git’s built-in help [on a Unix system] lists the different options for git cherry-pick.
    The syntax for the cherry-pick command is:
    Syntax:
    git cherry-pick [--edit] [-n] [-m parent-number] [-x] <commit>
    When we Use GIT Cherry-pick?
    git cherry-pick is a useful tool, but not a best practice all of the time. It can be used in the following scenarios:

    To correct things when a commit was accidentally made in the wrong branch
    When a traditional merge is not preferable
    To apply the changes of an existing commit
    To duplicate a commit
    For bug fixing

    How GIT Cherry-pick works?
    A defect is found in code in the Production environment and a fix needs to be implemented. Once the change is implemented and the defect is fixed, it is time to bring this code change back to the Development environment so that the defect does not recur in Production in the future.
    The first option is a simple git merge, which is ideal if it works. However, other changes have also been made in the Production environment, and those cannot be brought back to the Development environment by merging. In that case, cherry-pick is the correct option.
    Cherry-pick brings over only the commit that was made for the bug fix; it does not pick the other commits.
    Here is an illustration,
    Fig 1: G and H are the Production branch commits; A to F are Development branch commits. A problem is found in the Production branch. A fix is developed in commit H, which needs to be applied to the Development branch, but commit G is not required.

    Fig 2: Commit H is cherry-picked onto the Development branch and the resulting commit is H’. Commit G’s changes are not included in the Development branch.

    How to Use GIT Cherry-pick with Example?
    Suppose we have two branches [master and new_feature]. [command used to see the branches: git branch]

    We made the commit [highlighted] in the new_feature branch by mistake. [command used to see the commit log: git log]
    However, it was supposed to be in the master branch only. First, copy the highlighted SHA into a notepad.

    Now we will use the git cherry-pick command to move this commit to the master branch; however, before that we need to switch to the master branch. [command used to switch branches: git checkout master]

    [command used: git cherry-pick <SHA>] [paste the same SHA that was copied to the notepad earlier after the git cherry-pick command]

    Now we can see the same commit is available in the master branch. [command used: git log]
    A compact recap of these git cherry-pick commands is given below.
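    Assuming the stray commit’s SHA is abc1234 (a placeholder for the value copied to the notepad), the sequence is roughly:

        git branch                  # list the branches (master and new_feature)
        git log                     # find the commit made on new_feature by mistake; copy its SHA
        git checkout master         # switch to the branch the commit should have gone to
        git cherry-pick abc1234     # apply the copied commit onto master
        git log                     # confirm the commit now appears on master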
    Important Things to Remember
    Three things need to be remembered while using cherry-pick and working in a team.
    1. Standardize the commit message: it is better to use a standardized commit message and -x if we cherry-pick from a public branch.
    git cherry-pick -x
    This will help avoid merge conflicts in the future.
    2. Copy over the notes: sometimes the commits being cherry-picked have notes attached, and when we run cherry-pick the notes do not get copied, so they should be copied over separately.
    3. Cherry-pick multiple commits only when they are linear: if we want to cherry-pick multiple commits like G and H [Fig 1] and they are linear, then and only then use the command below:
    git cherry-pick G^..H
    Conclusion
    Suppose we want to pick up a specific commit from a different branch and apply it to the current branch; here are the recommended steps:
    1. Find the commit hash that needs to be cherry-picked.
    2. Go to the destination branch.
    3. git cherry-pick -x
    4. Resolve the conflicts if they happen. If there are notes on the original commit, those need to be copied over as well.
    Recommended Articles
    This is a guide to GIT cherry-pick. Here we discussed how it works and how to use git cherry-pick, with detailed examples.

  5. CapeCoraljala (3 years ago)

    Basic Database Continuous Integration and Delivery (CI/CD) using Visual Studio Team Services (VSTS)
    Problem
    You work as a lead SQL Server database developer for a company that provides digital services to the clients through their database system. You have been asked to implement basic database continuous integration and delivery (CI/CD) to ease the pressure on your development team and improve performance by automating build and deployment processes and complying with development best practices. In this tip we will look at how this can be done using Visual Studio Team Services.
    Solution
    The solution is to put your database code in an in-house or cloud-based source control system such as VSTS (Visual Studio Team Services) and configure database continuous integration and delivery pipelines for smooth database development and deployment cycles.
    This tip is focused on cloud-based source control system (Visual Studio Team Services) for database continuous integration and delivery.
    Basic Concepts of Database CI/CD
    Let’s first go through some of the basic concepts of database continuous integration and delivery.
    Database Continuous Integration (CI)
    Database Continuous Integration (CI) is a process of automating builds which means a build is triggered every time you or your development team checks database code into source control.
    Since the source code is your single source of truth, database continuous integration encourages your development team to frequently put their work under source control, which means there are no long-lived shelvesets left in isolation and all developers’ work can be joined together using database continuous integration.
    Database Continuous Delivery (CD)
    Database Continuous Delivery means automating database deployments and replacing manual interventions (as much as possible) when the database is prepared to be delivered to any target environment (Dev, Test, QA, etc.) but ultimately to Production.
    Smooth Database Continuous Integration (CI) further leads to a smoother Database Continuous Delivery (CD).
    Local Build(s) vs. Debugging
    Run a local build in Visual Studio by pressing Ctrl+Shift+B, which either compiles the database project successfully or fails in case of errors.
    On a successful build, a SQL Server Database Project deploys the database code to the local debug database; this is done by debugging the project (pressing F5).
    Therefore, debugging in the context of a SQL Database Project does the following things:

    Compiles (builds) the database objects (code) defined in the database project
    Deploys the database project to the debug database

    Artifacts
    Artifacts are the deployment ready files or packages that can be picked up by a Release Manager to deploy to target environments.
    DACPAC
    DACPAC is a single file (package) which contains all the database object definitions, ready to be transformed into a database.
    In this tip we are going to publish a DACPAC as an Artifact.
    Environments
    Multiple environments can be set up when automating or semi-automating deployments. The environment can be Test, Pre-Production, Production, QA and even Shared Dev.
    Database CI/CD Dependency on Source Control
    Please remember that database continuous integration and delivery cannot take place unless you check your database code into source control.
    Database CI/CD Setup
    Prerequisites
    This tip assumes that you already have a Visual Studio Team Services (VSTS) account. If you do not have one, it is very easy to create a free VSTS account using your Microsoft email address.
    Please note that you have to comply with the license terms when using Visual Studio Team Services account and similar services by Microsoft or any other vendor.
    VSTS Project Creation
    Let’s begin by logging into the Visual Studio Team Services (VSTS) account and creating a new Project “Digital Services Database CI and CD” choosing Git version control and Scrum work item process (you can choose any other work item process as well) as shown below:

    Open Visual Studio Project from VSTS
    Next click on the “Code” tab in Visual Studio Team Services and then click on the “Clone in Visual Studio” option to open the project in Visual Studio (you must have Visual Studio 2013+):

    Creating Local Repo (Git repository)
    Pressing the “Clone in Visual Studio” button is going to open Visual Studio (provided you have Visual Studio 2013+) and you are required to create a local repository by clicking on “Clone” as shown below:

    Creating a Database Project
    Next you will be asked to create a new project or solution.

    Create a new project “DigitalServices” under “DigitalServicesCICDSolution” (solution) as follows:

    Creating Database Objects from Script
    Let’s create database objects to replicate the scenario where you have built these objects in one go. The reason I have mentioned one go is because we have not yet put the database objects under source control. The starting point in Database CI/CD is to build and check your code into source control (TFS/Git).
    Please use the digital services (sample) database script as shown below:
    Next import the script into the Project after downloading it to a folder:

    After a few more steps click the “Finish” button and see your project getting populated with database objects from the script:

    Please configure the database project using the settings mentioned below:
    Target Platform (under Project Settings):
    Set the Target Platform as desired. In our case we have set it to SQL Server 2012; it can be any later version as well, such as SQL Server 2014 or SQL Server 2016.
    Debug Database:
    Also, set the debug database to point to your local SQL instance on a dev machine (if you prefer to do so).

    Always Recreate Database Deployment Option:
    Please set the “Always recreate database” deployment option to true under Project Settings > Debug.
    Debug Database Project to Kick-Off Local Build
    Press F5 to debug the project which will kick off the BUILD and then deploy the code to the debug database using local dev settings:

    Putting Project under Git Source Control
    The next step is to check your code into source control by pressing the CTRL+ALT+F7 keys anywhere in the Solution Explorer, adding a comment “Basic database objects ready” and then clicking on the “Commit All and Push” button as shown below:

    Now padlock icons appear next to the project items (stored procedures, tables, etc.) showing they have been checked into source control:

    Database CI/CD Implementation
    Now that the project is under source control it meets the minimum requirements to implement database continuous integration and delivery (CI/CD).
    Creating New Build Definition in VSTS
    The first step is to define a new build definition in Visual Studio Team Services to automate the Build Process.
    Navigate to the Builds Tab in VSTS and click on “New Definition” as follows:

    Next, keep the default settings as they are and click on “Continue” as shown in the figure below:

    Then in the next step start with an empty template:

    Select “hosted” agent queue from the drop-down box as shown below:

    Then add a new task “MSBuild” by clicking the “+” Symbol next to Phase 1 as follows:

    Next, locate the Solution file by clicking on the ellipses next to the Project text box as follows:

    Then point it to the solution file as follows:

    Add MSBUILD arguments “/t:build /p:CmdLineInMemoryStorage=True” as shown below:

    Next, add another build task “Copy Files” as shown in the figure below:

    Let us now point to the DACPAC file, which is by default produced by a successful build in the bin/Debug folder under the project folder:

    Then add another Task “Publish Build Artifacts” as follows:

    Please add the following settings in this task:

    Enabling Database Continuous Integration
    Next navigate to the “Triggers” tab under VSTS Build and Release and check “Enable continuous integration” and then click on “Save and queue” option as follows:

    Next add the following and then click on “Save & queue”:

    Queuing and Running Build
    Next click on build number and you will see the build getting queued and then up and running in a few seconds:

    Test Running Database CI/CD
    Now create a view “Clients” in the database project to see all the clients using the following code:
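    The view definition is not included in this copy of the tip; a minimal sketch (the underlying table and column names are assumptions, since the sample schema script is not shown) is:

        -- Clients.sql in the database project (table and column names are assumed)
        CREATE VIEW dbo.Clients
        AS
        SELECT ClientId,
               ClientName
        FROM dbo.Client;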
    Put the database code changes under source control as shown below:

    Instantly go to the Builds in VSTS as follows:

    We can see the build has been triggered automatically as soon as the code was put under source control.
    Now after a few seconds, check the status of the build:

    Artifacts Check
    Finally check the Artifacts to see the DACPAC:

    So your Database Continuous Integration (CI) is working and your DACPAC is ready for further Continuous Deployment or Delivery.
    A simple scenario for Continuous Delivery (CD) can be automating the task of pushing the Artifact (DACPAC) to the target environment (Test, QA, Pre-Production or Production) to be deployed; however, the detailed steps are beyond the scope of this tip.
    Next Steps

    After going through the process, download the published Artifact (DACPAC) and run it locally to see if you can create a database from it.
    Please go through my previous Tip and see if you can replicate developing a multi-customer database using database continuous integration and delivery.
    Please have a look at my previous Tip to add tSQLt unit tests to your project and see if you can add those tests in the VSTS Build.
    Try creating the whole project mentioned in my previous Tip using test-driven development (TDD).
    Learn more about SQL Server Data Tools from the MSDN Library.
    Please explore further Database CI/CD by visiting the VSTS home page.

    Last Updated: 2018-02-22

    About the author

    Haroon Ashraf’s interests are Database-Centric Architectures and his expertise includes development, testing, implementation and migration along with Database Life Cycle Management (DLM).
    Comments For This Article
    Tuesday, May 26, 2020 – 9:25:15 AM – Haroon Ashraf Back To Top (85767)
    I am pleased that this tip helped you in your task.
    As far as creating a change script is concerned, you do not need to worry about that part, because it is managed by the build when compiling the SQL Database Project, and the DACPAC knows what to do: whether to create a brand new database and its related objects, or to modify the existing objects based on the changes you made to the project.
    However, if there is no database at all, it is going to be created by the DACPAC when deployed to the target site (database), and if the database is already there, then it is going to be updated based on the changes you have made.
    Thursday, May 21, 2020 – 8:12:25 AM – Vivek Back To Top (85713)
    This tip helped me to set up the build, but I have a small hiccup: I would like to generate only a database change script instead of a Create script which has all objects. Can you please let me know how I can achieve this? The change script should have only the objects which were changed or newly added.
    Thursday, May 07, 2020 – 9:17:24 AM – Haroon Ashraf Back To Top (85595)
    Thank you for your feedback, Radhika.
    I am glad to know that the tip was helpful to you.
    It is possible that I may write about continuous delivery or something about release pipelines in the same (easy to understand) pattern in which this tip was written.
    Thursday, May 07, 2020 – 7:14:36 AM – Radhika Back To Top (85593)
    The tip was amazing, it really helped, thanks a lot. Kindly post a tip for continuous delivery as well.
    Thursday, April 05, 2018 – 3:16:32 AM – Haroon Ashraf Back To Top (75614)
    I would like to share that the article about adding reference data in database continuous integration pipeline is available now.
    Please click the link below to see the article:
    Friday, March 02, 2018 – 5:43:04 PM – Haroon Ashraf Back To Top (75339)
    Adding reference data in database continuous integration and delivery workflow is a very good point, which requires a more solid approach to get reference data in sync with all the environments (sandboxed, dev, test and production) first.
    Please stay in touch with MSSQLTips as more tips of similar type are on their way.
    Friday, March 02, 2018 – 3:03:55 PM – Haroon Ashraf Back To Top (75338)
    Thank you, John, for your kind remarks.
    Yes, it would be really nice to get familiar with your database CI/CD workflows which look quite promising as I said earlier.
    Meanwhile, please stay in touch with MSSQLTips; if I write more about database continuous integration and delivery, your comments can add more to it.
    Thank you once again for your valued comments.
    Thursday, March 01, 2018 – 9:55:50 PM – John Shkolnik Back To Top (75333)
    A great sentiment with which I agree 100%, Haroon.
    I’d be happy to speak with you live about how we’re doing CI/CD if you’re ever interested.
    Thursday, March 01, 2018 – 11:17:18 AM – rr Back To Top (75328)
    Can you please add how to include static/reference data with the CI/CD mentioned in this article.
    Wednesday, February 28, 2018 – 6:52:11 PM – Haroon Ashraf Back To Top (75324)
    Thank you very much for your detailed comments.
    Let us not go any further trying to prove which of the two unit-testing frameworks is best; why not discuss some common points which others may find helpful.
    Let us refer to the same video by Eric about database unit-testing with Slacker, in which he has given a beautiful guideline for a good unit-testing framework, which is as follows:

    Easy to setup and run
    Flexible to mock or generate test data
    Easy to develop test logic
    Easy to debug test code or code issues
    Seamless integration with CI&CD Pipeline

    Slacker already complies with all the above-mentioned things, but this is also true for tSQLt unit tests.
    tSQLt does not require the actual database to be changed to support the test if we are working with SSDT (SQL Server Data Tools), and I assume the same is true for Slacker.
    A Database Project in SSDT serves as the source of truth, apart from the other benefits it offers, when there are multiple environments and multiple versions of the same database floating around, but it is in no way a limitation.
    Connected development mode in SSDT does not require any model at all, rather we work directly on database objects depending on the requirements.
    Thank you once again for sharing your CI/CD workflows which look quite promising. 🙂
    Tuesday, February 27, 2018 – 9:59:14 PM – John Shkolnik Back To Top (75309)
    This is true. Then again, it’s quite easy to pick up Ruby syntax.
    SQL isn’t really conducive to writing this sort of logic. Does tSQLt have an equivalent to the reusable erb templates so the SQL called by unit tests can be assembled dynamically? Like this: https://github.com/vassilvk/slacker/blob/master/lib/slacker_new/project/sql/common/sproc.sql.erb
    The csv’s are an option for readability and reusability; you can do it all inline if you want. Same is true for the functions. That’s like saying SQL scalar functions are a drawback.
    I don’t know what you mean by this because, unless I misunderstood, Slacker does that as well.
    This is true but it doesn’t seem like it’d take that much to add. A better question is how the concept of a unit test can be applied to something as large as a cross-database scenario?
    tSQLt is more popular, yes, but it’s been around longer and something was better than nothing. That doesn’t make it better.

    Correct me if wrong but doesn’t tSQLt require the actual database being tested to be changed in order to support the tests? This seems like a fundamental no-no. For example, setting a database to trustworthy can easily allow something to work which would otherwise fail.
    Our database projects are not cumulative from inception. We can recreate our databases as of any point in time as if they were “born” like that (with all system data) rather than having to deploy a model and upgrade it to current. Our CI builds deploy a temporary database instance as of that point in time, run Slacker tests (and publish the results to the test tab via adapter), and then drop the database. For our release pipelines, we also layer in functional tests by seeding in data and seeing how the instances perform both as new deployments as well as upgraded from previous point in time.
    Eric Kang posted a video last year demonstrating Slacker: https://channel9.msdn.com/Shows/Visual-Studio-Toolbox/SQL-Server-Database-Unit-Testing-in-your-DevOps-pipeline
    Sunday, February 25, 2018 – 2:13:28 PM – Haroon Ashraf Back To Top (75295)
    Thank you for sharing your valued comments.
    Yes, Slacker is another competitive database unit-testing framework as you mentioned.
    We have the following benefits when tSQLt is used for unit-testing framework:
    (1) With tSQLt, no context switching is required (your developers don’t need to switch to any other language from SQL), as is the case with Slacker, which is an RSpec-based testing tool written in Ruby
    (2) tSQLt unit tests inherently support test-driven development, in that your development team writes SQL object code as tests in the same way they would write the object code in the first place in the absence of test-driven development
    (3) Slacker tests depend on function calls to mock data, which is stored in separate csv files, while tSQLt unit tests are self-contained and do not need to refer to any external reference data; the reference data is stored in the same test
    (4) Isolating dependencies for an object under test in tSQLt has an edge over other database unit-testing frameworks
    (5) tSQLt supports cross-database unit testing
    (6) tSQLt is the underlying unit-testing framework behind many famous commercial database testing tools.
    However, the choice is yours and it depends on your requirements and preferences.
    With Regards to deploying and dropping database during CI build, can you please give some more information?
    Thank you once again for sharing your comments with us.
    Saturday, February 24, 2018 – 9:12:42 PM – John Shkolnik Back To Top (75291)
    We deploy and drop our database during the CI build. We also run database unit tests using Slacker which is superior to tSQLt (in my opinion.)

  7. We help you stay in comfort while you take care of your business
    CREDITLOANjala

    Azure Pipelines for Jira
    Instantly track development for Jira issues by adding information from Azure Pipelines

    View pipelines information in Jira Software – end-to-end development cycle tracked in Jira issues.
    Track associated Jira issues in Azure Pipelines – accurately know the associated Jira issues in each build and deployment.
    Integrate with multiple Azure DevOps orgs – connect multiple Azure DevOps organizations with Jira to help all your teams.
    More details
    Azure Pipelines enables you to continuously build, test, and deploy to any platform or cloud. You can have flexible workflows, advanced deployment scenarios like approvals and gates, and use marketplace extensions to support anything you need in your pipelines. You can use hosted Windows, Linux, and Mac agents to execute your pipelines.
    This plugin connects Jira Software with Azure Pipelines, enabling full tracking of how and when an issue is delivered.
    It unlocks the following functionality:

    View release status in development section of Jira issues with links to Azure Pipelines.
    View issues in release pipelines with deep links back to Jira. Simply get accurate release notes for each deployment.

    NOTE: This plugin is currently Jira Cloud only, as the APIs we depend on are only available in Jira Cloud. When those APIs become available in Jira Server this extension will work there too.
    Also, the plugin works only for GitHub and Azure Repos. Bitbucket support will be added later.
    Integration details
    Azure Pipelines for Jira integrates with your Atlassian product. This remote service can:

    Write data to the host application
    Read data from the host application

    Versions
    1.0.9-AC Jira Cloud • Released 2019-09-03
    Summary
    Integrate release pipelines with Jira software v2. Added support for Azure Repos
    Details
    v1.0.9 – This version adds the support for Azure Repos to the plugin.
    Azure Pipelines integration with Jira connects the two services, providing full tracking of how and when the value envisioned with an issue is delivered. Using the integration, teams can set up a tight development cycle from issue creation through release. Key development milestones for an issue can be tracked from Jira. Kindly take note of the following.

    The current version supports tracking when GitHub or Azure repos are linked with Jira and deployed with Azure Pipelines.
    The Jira issue must be mentioned in the commit message or PR title for the integration to work (an example follows); mentions in commit details and PR details do not work.
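    For illustration only, with a hypothetical issue key, such a commit could look like:

    ```
    git commit -m "ABC-123 add retry logic to the deployment job"
    ```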


    Installation

    Log into your Jira instance as an admin.
    Click the admin dropdown and choose Add-ons. The Find new apps or Find new add-ons screen loads.
    Locate Azure Pipelines for Jira.
    Click Install to download and install your app.
    You’re all set! Click Close in the Installed and ready to go dialog.


    3 years ago Reply

  8. We help you stay in comfort while you take care of your business
    Minneapolisjala

    Generate Release Notes (Crossplatform)
    Richard Fennell (Black Marble)
    Releases
    Generates release notes for a Classic Build or Release, or a YAML-based build. The generated file can be any text-based format of your choice.

    Can be used on any type of Azure DevOps Agents (Windows, Mac or Linux)
    Uses same logic as Azure DevOps Release UI to work out the work items and commits/changesets associated with the release
    3.28.x provides a new array, inDirectlyAssociatedPullRequests; this contains the PRs associated with a PR’s associated commits. Useful if using a Gitflow workflow (#866, see sample template below)
    3.27.x enriches the PR with associated commits (see sample template below)
    3.25.x enriches the PR with associated work items references (you need to do a lookup into the list of work items to get details, see sample template below)
    3.24.x adds labels/tags to the PR
    3.21.x adds an override for the historic pipeline to compare against
    3.8.x adds currentStage variable for multi-stage YAML based builds
    3.6.x adds compareBuildDetails variable for YAML based builds
    3.5.x removed the need to enable OAUTH access for the Agent phase
    3.4.x adds support for getting full commit messages from Bitbucket
    3.1.x adds support for looking for the last successful stage in a multi-stage YAML pipeline. For this to work the stage name must be unique in the pipeline
    3.0.x drops support for the legacy template model, only handlebars templates supported.
    The Azure DevOps REST APIs have a limitation that by default they only return 200 items, but a release could include more work items or changesets/commits. A workaround for this has been added (#349). Since version 2.12.x this feature has been enabled by default. To disable it, set the variable ReleaseNotes.Fix349 to false

    IMPORTANT – There have been three major versions of this extension, this is because

    V1 used the preview APIs and is required if using TFS 2018, as this only has the older APIs. This version is no longer shipped in the extension, but can be downloaded from GitHub
    V2 was a complete rewrite by @gregpakes using the Node Azure DevOps SDK, with minor but breaking changes in the template format, and it requires OAuth to be enabled on the agent running the tasks. At 2.27.x KennethScott (https://github.com/KennethScott) added support for Handlebars templates.
    V3 removed support for the legacy template model, only handlebars templates supported.

    Overview
    This task generates a release notes file based on a template passed into the tool. It can be used inside a classic Build, a classic Release, or a multi-stage YAML pipeline.
    The data source for the generated Release Notes is the Azure DevOps REST API’s comparison calls that are also used by the Azure DevOps UI to show the associated Work items and commit/changesets between two releases. Hence this task should generate the same list of work items and commits/changesets as the Azure DevOps UI, though it can enrich this core data with extra information.

    Note that this comparison is only done against the primary build artifact linked to a Classic Release
    If used in the build or non-multistage YAML pipeline the release notes are based on the current build only.

    There are many ways that the task can be used. A common pattern is to use the task multiple times in a pipeline

    Run once for every build, so you get the build notes of what is in that build
    Run as part of a deployment stage, checking against the last successful build, to generate the release notes issued with that release.

    Possible sets of parameters depending on your usage are summarized below
    Generate notes for just the current build
    - Multi-stage YAML: requires the checkstages=false parameter
    - Classic Build/Release: run inside the build
    Generate notes since the last successful release – Option 1: place the task in a stage that is only run when you wish to generate release notes, usually guarded by branch-based filters or manual approvals
    - Multi-stage YAML: requires the checkstages=true parameter
    - Classic Build/Release: run inside the release; supported, and you can override the stage name used for comparison using the overrideStageName parameter
    Generate notes since the last successful release – Option 2: set the task to look back for the last successful build that has a given tag
    - Multi-stage YAML: requires checkstages=true and the tags parameters
    - Classic Build/Release: not supported
    Generate notes since the last successful release – Option 3: override the build that the task uses for comparison with a fixed value
    - Multi-stage YAML: requires checkstages=true and the overrideBuildReleaseId parameters
    - Classic Build/Release: run inside the release; requires the overrideBuildReleaseId parameter
    You can see the documentation for all the features in the WIKI and the YAML usage here
    The Template
    There are sample templates that just produce basic releases notes for both Git and TFVC based releases. Most samples are for Markdown file generation, but it is possible to generate any other format such as HTML
    The legacy templating format has been deprecated in V3. The only templating model supported is Handlebars
    Handlebar Templates
    Since 2.27.x it has been possible to create your templates using Handlebars syntax. A template written in this format is as follows
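    A minimal sketch of such a Markdown-producing template, using the forEach helper and the common objects listed later in this post, might look like:

    ```handlebars
    # Release notes for build {{buildDetails.buildNumber}}

    ## Associated work items
    {{#forEach workItems}}
    * **{{this.id}}** – {{lookup this.fields 'System.Title'}}
    {{/forEach}}

    ## Associated commits
    {{#forEach commits}}
    * {{this.id}} – {{this.message}}
    {{/forEach}}
    ```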

    IMPORTANT Handlebars based templates have different objects available to the legacy template used in V2 of this extension. This is a break change.

    What is done behind the scenes is that each {{ }} block in the template is expanded by Handlebars. The property objects available to get data from at runtime are:
    Common objects

    workItems – the array of work items associated with the release
    commits – the array of commits/changesets associated with the release
    pullRequests – the array of PRs (inc. labels, associated WI links and commits to the source branch) referenced by the commits in the release
    inDirectlyAssociatedPullRequests – the array of PRs (inc. labels, associated WI links and commits to the source branch) referenced by associated commits of the directly linked PRs. #866
    tests – the array of unique tests associated with any of the builds linked to the release or the release itself
    builds – the array of the build artifacts that CS and WI are associated with. Note that each entry is an object with the following properties
    build – the build details
    commits – the commits associated with this build
    workitems – the work items associated with the build
    tests – the tests associated with the build

    relatedWorkItems – the array of all work items associated with the release plus their direct parents or children and/or all parents, depending on task parameters

    Release objects (only available in a Classic UI based Releases)

    releaseDetails – the release details of the release that the task was triggered for.
    compareReleaseDetails – the previous successful release that comparisons are being made against
    releaseTest – the list of tests associated with the release, e.g. integration tests

    Build objects (available for Classic UI based builds and any YAML based pipelines)

    buildDetails – if running in a build, the build details of the build that the task is running in. If running in a release it is the build that triggered the release.
    compareBuildDetails – the previous successful build that comparisons are being made against, only available if checkstage=true
    currentStage – if checkstage is enabled, this object is set to the details of the stage in the current build that is being used for the stage check

    Note: To dump all possible values via the template, use the custom Handlebars extension {{json this}}; this runs a custom Handlebars extension to do the expansion. There are also options to dump these raw values to the build console log or to a file (see below).

    Note: if a field contains escaped HTML-encoded data, it can be returned in its original format with the triple-brace format {{{ }}}

    Handlebar Extensions
    With 2.28.x support was added for Handlebars extensions in a number of ways:
    The Handlebars Helpers extension library is also pre-loaded; this provides over 120 useful extensions to aid in data manipulation when templating. They are used with the standard {{helperName …}} syntax.
    In addition to the Handlebars Helpers extension library, there are also custom helpers pre-loaded, specific to the needs of this Azure DevOps task

    json that will dump the contents of any object. This is useful when working out what can be displayed in a template, though there are other ways to dump objects to files (see below)

    lookup_a_work_item this looks up a work item in the global array of work items based on a work item relations URL

    lookup_a_pullrequest this looks up a pull request item in the global array of pull requests based on a work item relations URL

    get_only_message_firstline this gets just the first line of a multi-line commit message
    lookup_a_pullrequest_by_merge_commit this looks up a pull request item in an array of pull requests based on a last merge commit ID

    Finally there is also support for your own custom extension libraries. These are provided via an Azure DevOps task parameter holding a block of JavaScript which is loaded into the Handlebars templating engine. They are entered in the YAML as a single line or a multi-line parameter as follows using the | operator
    and can be consumed in a template as shown below
    As custom modules allow any JavaScript logic to be injected for bespoke needs, they can be the solution to your own filtering and sorting requirements; a rough sketch follows. You can find samples of custom modules in the Handlebars section of the sample templates, e.g. to perform a sorted foreach.
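    A rough sketch only (the task and input names here are assumptions based on the parameter list later in this post, not verbatim from the documentation): a custom helper supplied through the pipeline YAML and consumed in an inline template.

    ```yaml
    steps:
      - task: XplatGenerateReleaseNotes@3          # task name is an assumption
        inputs:
          outputfile: '$(Build.ArtifactStagingDirectory)/releasenotes.md'
          templateLocation: 'InLine'
          inlinetemplate: |
            Greeting from the custom helper: {{foo}}
          customHandlebarsExtensionCode: |
            module.exports = {
              foo: function () { return 'Hello'; }
            };
    ```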
    Task Parameters
    Once the extension is added to your Azure DevOps Server (TFS) or Azure DevOps Services, the task should be available in the utilities section of ‘add tasks’
    IMPORTANT – The V2 Task requires that oAuth access is enabled on agent running the task, this requirement has been removed in V3
    The task takes the following parameters

    The output file name, for builds this will normally be set to $(Build.ArtifactStagingDirectory)\releasenotes.md as the release notes will usually be part of your build artifacts. For release management usage the parameter should be set to something like $(System.DefaultWorkingDirectory)\releasenotes.md . Where you choose to send the created files is down to your deployment needs.
    A picker allows you to set whether the template is provided as a file in source control or an inline file. The setting of this picker affects which other parameters are shown
    Either, the template file name, which should point to a file in source control.
    Or, the template text.

    Check Stage – If true a comparison is made against the last build that was successful to the current stage, or overrideStageName if specified (Build Only)
    (Advanced) Empty set text – the text to place in the results file if there is no changeset/commit or WI content
    (Advanced) Name of the release stage to look for the last successful release in, defaults to empty value so uses the current stage of the release that the task is running in.
    (Advanced) Do not generate release notes of a re-deploy. If this is set, and a re-deploy occurs, the task will succeed with a warning
    (Advanced) Primary Only. If this is set only WI and CS associated with primary artifact are listed, default is false so all artifacts scanned.
    (Advanced) Replace File. If this is set, the output overwrites any file already present.
    (Advanced) Append To File. If this is set, and Replace File is false, the output is appended to the output file. If false, it is prepended.
    (Advanced) Cross Project For PRs. If true will try to match commits to Azure DevOps PR cross project within the organisation, if false only searches the Team Project.
    (Advanced) Override Stage Name. If set uses this stage name to find the last successful deployment, as opposed to the currently active stage
    (Advanced) GitHub PAT. (Optional) This GitHub PAT is only required to expand commit messages stored in private GitHub repos. This PAT is not required for commits in Azure DevOps public or private repos or in public GitHub repos
    (Advanced) BitBucket User (Optional) To expand commit messages stored in private Bitbucket repos, a Bitbucket user and app password need to be provided; it is not required for repos stored in Azure DevOps or public Bitbucket repos.
    (Advanced) BitBucket Password (Optional) See BitBucket User documentation above
    (Advanced) Dump Payload to Console – If true, the data objects passed to the file generator are dumped to the log.
    (Advanced) Dump Payload to File – If true, the data objects passed to the file generator are dumped to a JSON file.
    (Advanced) Dump Payload Filename – The filename to dump the data objects passed to the file generator
    (Advanced) Get Direct Parent and Children for associated work items, defaults to false
    (Advanced) Get All Parents for associated work items, recursing back to workitems with no parents e.g. up to Epics, defaults to false
    (Advanced) Tags – a comma-separated list of pipeline tags that must all be matched when looking for previous successful builds, only used if checkStage=true
    (Advanced) OverridePat – a means to inject a Personal Access Token to use in place of the Build Agent OAUTH token. This option will only be used in very rare situations usually after a support issue has been raised, defaults to empty
    (Advanced) getIndirectPullRequests – if enabled, an attempt will be made to populate a list of indirectly associated PRs, i.e. PRs that are associated with a PR’s associated commits (#866)
    (Advanced) OverrideBuildReleaseId – For releases or multi-stage YAML this parameter provides a means to set the ID of the ‘last good release’ to compare against. If the specified release/build is not found then the task will exit with an error. The override behaviour is as follows
    (Default) Parameter undefined – Old behaviour, looks for last successful build using optional stage and tag filters
    Empty string – Old behaviour, looks for last successful build using optional stage and tag filters
    A valid build ID (int) – Use the build ID as the comparison
    An invalid build ID (int) – If a valid build cannot be found then the task exits with a message
    Any other non empty input value – The task exits with an error message

    (Handlebars) customHandlebarsExtensionCode. A custom Handlebars extension written as a JavaScript module, e.g. module.exports = {foo: function () { … }};
    (Outputs) Optional: Name of the variable that release notes contents will be copied into for use in other tasks. As an output variable equates to an environment variable, so there is a limit on the maximum size. For larger release notes it is best to save the file locally as opposed to using an output variable.

    Output location
    When using this task within a build, it is sensible to publish the release notes file as a build artifact.
    However, within a release there is no such artifact location. Hence, it is recommended that a task such as the WIKI Updater is used to upload the resultant file to a wiki. There are other options, though, such as storing the file on a UNC share, in an Azure DevOps artifact, or sending it as an email.
    Local Testing of the Task & Templates
    To speed the development of templates, with version 2.50.x, a tool is provided in this repo to allow local testing.
    Also there are now parameters (see above) to dump all the REST API payload data to the console or a file to make discovery of the data available in a template easier.

    3 years ago Reply

  9. We help you stay in comfort while you take care of your business
    GreenBayjala

    Continuous Delivery to Windows Azure Web Sites (or IIS)
    In this tutorial, we’ll go over the basics of configuration transforms and WebDeploy and see how we can deploy an ASP.NET MVC project to IIS or Windows Azure Web Sites from our TeamCity server using WebDeploy.
    Deploying ASP.NET applications can be done in a multitude of ways. Some build the application on a workstation and then xcopy it over to the target server. Some use a build server, download the artifacts, change the configuration files and xcopy those over to the server. The issue with that arises when something bad creeps in: deployments become unpredictable. What if there are leftovers of unnecessary or old assemblies on that workstation we’re xcopying from? What if we forget to change the database connection string in Web.config and mess up that release? How do we quickly roll back if that happens? The .NET stack has a solution to this: Configuration Transforms and WebDeploy.
    Configuration Transforms

    One of the things that typically have to happen during deployment is making changes to the configuration. Changing the database connection string, changing ASP.NET settings to no longer show us YSOD’s and so on. Don’t hard-code these things or write a big if-else statement based on the server’s hostname to figure out the configuration. Instead, use something like configuration transforms.
    Configuration transforms are files that describe “transformations” to Web.config, based on the build configuration being used. Building the Release configuration? Then Web.config will be updated with the rules described in Web.Release.config. Let’s remove the debug attribute from our configuration when doing a Release build:
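    A minimal sketch of such a Web.Release.config transform, using the standard XDT syntax:

    ```xml
    <?xml version="1.0"?>
    <configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
      <system.web>
        <!-- Strip the debug attribute from <compilation> in Release builds -->
        <compilation xdt:Transform="RemoveAttributes(debug)" />
      </system.web>
    </configuration>
    ```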

    A typical ASP.NET application created in Visual Studio will contain transforms for Debug and Release builds, but they can be added by creating a new build configuration (through the Build | Configuration Manager menu) and then using the context menu Add Config Transform.
    For this tutorial, I’ve created 2 new configurations: Development and Production, and generated 2 new configuration transforms as well ( Web.Development.config and Web.Production.config).
    To test the config transform, we can make use of the context menu Preview Transform, which will show us exactly what the resulting configuration file is going to look like. The following is the result of running the Web.Release.config transform:

    We can use this to virtually change or add any setting we’d like to change. Connection strings, file paths, app settings, diagnostics configuration and so on. Here’s some more documentation on what you can do with config transforms.
    WebDeploy
    For several versions, Visual Studio has had the option to create so-called “web packages” for any ASP.NET application, containing all files required to run the app. Pages, images, CSS, JavaScript and the application binaries can be exported in such package. It’s even possible to include databases and IIS settings!
    These deployment packages can be used together with WebDeploy, a tool which can upload the package to a server using various protocols and can apply the config transforms we’ve talked about earlier.
    But before we deploy, let’s first see how we can create a deployment package. And just so we learn about the package format, let’s first do this manually by invoking msbuild.
    Manually creating a deployment package
    Deployment packages can be created by running the Package build target on the project, which can easily be done using msbuild:
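    A typical invocation looks roughly like this (the project name follows the AcmeCompany.Portal example used later in this article):

    ```
    msbuild AcmeCompany.Portal.csproj /t:Package /p:Configuration=Release
    ```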
    The project will be compiled and a new folder created, containing our deployment package. And more!

    The ZIP file contains our application, the other files are supporting files for deploying to a target machine. An interesting file is AcmeCompany.Portal.SetParameters.xml. It contains the result of our config transforms, but allows for overriding these values. Why? Well, the person building the deployment package may not know the connection string. Imagine only an administrator knows? That person can override the setting with the correct, final connection string for production through this file.
    The AcmeCompany.Portal.deploy.cmd batch file can be run to deploy to a target environment, but… how does that work?
    WebDeploy can make use of several methods to transfer the deployment package to a remote server and update configuration. It can be done using WebDeploy (an HTTPS based protocol), FTP or using a File Share. For the first option, some additional tools should be enabled on the target IIS server. With good reason: the WebDeploy server-side tool will do real synchronization between sites and delete redundant content from the server. For FTP or a file share, no additional tools are required.
    For the remainder of this tutorial, we will be covering deployment to Windows Azure Web Sites using WebDeploy, which is identical to how it works on IIS.
    Step 1: Configuring deployment packages / WebDeploy with Visual Studio
    In the previous step, we’ve created a deployment package manually and we would also have to invoke WebDeploy manually. There is an easier way though: configuring deployment packages and WebDeploy in one go, from Visual Studio.
    From the web application that should be deployed, use the context menu on the project node and click Publish. This will open up a dialog where we can do some configuration related to our deployment. We can even create multiple deployment profiles, for example one for staging and one for production.
    In the first step, we have to specify destination server details. This would typically be the HTTPS endpoint to the WebDeploy host (or FTP or file share details if that option was selected). After providing all details, we can validate the connection to see if it works.

    Note that instead of going through the entire wizard, Windows Azure Web Sites tooling allows importing the publish profile from the Windows Azure Management Portal. I’m showing the entire process here for when deploying to IIS.
    Does a password have to be specified? No! In case the developer doesn’t know it, credentials can be left blank; we’ll provide the username and password later on when deploying from TeamCity.
    In the next step, we can specify some deployment specifics: should files that are not in the deployment package be deleted from the target server? Should the application be precompiled? Should the database connection string be overridden? And when using Entity Framework Code First: should migrations be executed?

    We can close the wizard after this step, and save the publish settings just created into a file in our project:

    This is just an XML file and we can edit it if needed. And actually we should, to make our life easier later on. Open the XML file and find the element. When running the WebDeploy packaging step from the command line (which TeamCity will effectively do), this location will not be found. To resolve this, change the element value and prefix the path with _$(SolutionDir)_ . Here’s an example of what this element could look like:
    Save the file and make sure it is added to source control so we can make use of it when running the deployment on TeamCity.
    Step 2: Setting up the continuous integration build on TeamCity
    We want to have a continuous integration (CI) build for our project, which we can trigger on every VCS check-in. This CI build will provide us with immediate feedback on the project’s build status and health.
    TeamCity 8.1 allows us to create a project based on a VCS URL. We can simply enter the URL to a Git, Mercurial, Subversion, etc. repository:

    This repository will be analyzed and scanned for build steps. In our case, TeamCity discovered a Visual Studio 2013 build step which we can immediately add to our build configuration:

    Adding the suggested build step will result in a working build if we run it. We can specify artifact paths, version number and so on. One thing is missing though! The WebDeploy deployment package is nowhere to be seen. The reason for this is we are building the Rebuild target, which simply rebuilds our project without packaging. To solve this, we can add some additional command line parameters to our build step:
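    A sketch of the extra parameters added to the Visual Studio build step (these three switches are the ones explained next):

    ```
    /p:DeployOnBuild=True /p:PublishProfile="Development" /p:ProfileTransformWebConfigEnabled=False
    ```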

    Here’s what these parameters do:
    /p:DeployOnBuild=True – triggers WebDeploy packaging
    /p:PublishProfile=”Development” – specifies the deployment profile to use when packaging
    /p:ProfileTransformWebConfigEnabled=False – let’s discuss this one in detail!
    Standard configuration transformations are run in an early stage, but WebDeploy runs another transformation using the setting from our publish profile. This causes earlier configuration transformations to be overwritten, which we don’t want to happen. Disabling the ProfileTransformWebConfigEnabled parameter avoids running this additional configuration transformation.
    If we now run the build again (having specified artifacts\webdeploy\Development => Webdeploy as the artifact path, which is the path we configured in the publish profile earlier on), we will see a familiar set of files published as artifacts:

    Now let’s see if we can set up the actual deployment as well!
    Step 3: Setting up the deployment on TeamCity
    The strategy we’ll be using for our deployments is described in the How To. We will be creating a new build configuration for every target environment we want to deploy to. These new build configurations will:
    Perform the deployment
    What we want to achieve is this nice waterfall, where we can promote our build from CI to development to staging to production, or whichever environments we have in between CI and production.

    From the TeamCity Administration, copy the CI build configuration and name it differently, for example Deploy to Windows Azure Web Sites – Development. Next, we will make some changes to the build configuration.
    Let’s start by specifying build dependencies. Under the build configuration’s Dependencies, add a new snapshot dependency on our CI build. This will ensure that deployment will only be possible if a matching CI build has passed completely, and that the deployment will be based on the exact same VCS revision as we built during CI.

    We want to be able to identify the build numbers throughout the entire chain of deployments. For example, if CI build 1.0.0 is deployed to staging, we want to be sure that this is actually version 1.0.0 and not some intermediate version. Under General Settings, change the build number format to use the same version number as the originating CI build. The build number format will have to be similar to %dep.WebAcmeCorpPortal_ContinuousIntegration.build.number%, duplicating the version number from the CI build.
    Our CI build was building the default configuration for our solution. Since we are now deploying to a different environment and we’ve created deployment configurations (and configuration transforms) for Development and Production, let’s change the build configuration through the Visual Studio build step.

    Now comes the actual deployment step! Up until now, we have built our project but we haven’t really done anything to ship it to an actual server. Let’s change that by adding a new build step based on a Command Line runner. As the build script, enter the following:
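    Reassembled from the parts described below, the command looks roughly like this; the server, site, user name, and password placeholders are hypothetical and must be replaced with your own values:

    ```
    "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" ^
      -source:package='artifacts\WebDeploy\AcmeCompany.Portal.zip' ^
      -dest:auto,computerName="https://<your-site>.scm.azurewebsites.net:443/msdeploy.axd?site=<your-site>",userName="<user>",password="<password>",authtype="Basic",includeAcls="False" ^
      -verb:sync ^
      -disableLink:AppPoolExtension -disableLink:ContentExtension -disableLink:CertificateExtension ^
      -setParamFile:"msdeploy\parameters\AcmeCompany.Portal.SetParameters.xml"
    ```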
    That’s quite a bit, right? Let’s go through this command:
    “C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe” is the path to the msdeploy.exe which has to be available on the build agent.
    -source:package=’artifacts\WebDeploy \AcmeCompany.Portal.zip’ specifies the deployment package we want to upload
    -dest:auto,computerName=”https:// :443/msdeploy.axd?site= “,userName=” “,password=” “,authtype=”Basic”,includeAcls=”False” specifies the URL to the deployment service. For Windows Azure Web Sites, this will be in the aforementioned format. For IIS, this may be different (see Sayed Ibrahim Ashimi’s excellent post on WebDeploy parameters)
    -verb:sync tells WebDeploy to synchronize only changed files (this will drastically reduce deployment time as not all files will be uploaded for every deployment)
    -disableLink:AppPoolExtension -disableLink:ContentExtension -disableLink:CertificateExtension are used to disable certain configuration steps on the remote machine. These may be different for your environment, see MSDN for a complete list.
    -setParamFile:”msdeploy\parameters \AcmeCompany.Portal.SetParameters.xml” is an important one. It specifies the WebDeploy parameters that will be replaced in the deployed Web.config file on the remote server, for example the connection string. More on this file in a second.
    The parameters file passed to the msdeploy.exe has to be created somehow. We’ve seen the build artifacts for our CI build contained a copy of this file and that one can be used if deployment secrets (such as the production database connection string) are available in source control. We probably don’t want this, at least not in the same source control root our developers are all using.
    Instead of storing passwords in a separate VCS root, they can also be added as Typed Parameters of type password in TeamCity. This will require creating the configuration file during the deployment, based on these configuration parameters.
    For my setup, I’ve customized the AcmeCompany.Portal.SetParameters.xml file and put the configurations for the different target environments in a second VCS root, only available to the TeamCity server. This keeps the database connection strings a secret to everyone but TeamCity.

    We can repeat these steps to create a build configuration for staging, for QA, for production and so on. Since we want to promote builds over this entire chain, these configurations should all have a snapshot dependency on the previous environment.
    Here’s what this could look like: 3 different build configurations, denoting different versions that are deployed to each target environment:

    Step 4: Promoting CI builds
    Now that we have everything in place, let’s see how we can promote builds from one environment to another. When we navigate to the build results of a CI build, we can use the Actions dropdown to promote our build to the next environment.

    Having configured the snapshot dependencies for our build configurations, TeamCity knows what the next environment should be: development.

    This will trigger a new build that will deploy version 1.1.3 to the development environment. Once validated, we can navigate to that build’s results and promote the build to the next environment.
    Because of the snapshot dependencies we created, we can now also go to any build’s Dependencies tab and see the environments where it has been deployed to. Here’s build 1.1.3 as seen from development. We can see a CI build has been made, deployment to development has been done and deployment to production is still running:

    For a build configuration with snapshot dependencies, we can enable showing of changes from these dependencies using the Show changes from snapshot dependencies version control setting. This enables us to see exactly which changes are deployed. See Build Dependencies Setup for more information.
    Conclusion
    By thinking of a deployment as a chain of builds, doing deployments from TeamCity is not too hard. In this tutorial, we’ve used WebDeploy as an example means of transferring build artifacts to a target environment, but this could also have been another solution (like xcopy).
    Using VCS labeling, it’s also possible to label sources when a specific deployment happens. By pinning builds (optionally through the TeamCity API), we can make sure that build clean-up does not remove certain builds and artifacts.

    3 years ago Reply

  10. We help you stay in comfort while you take care of your business
    NewOrleansjala

    Continuous delivery vs. continuous deployment
    Speed high-quality code to customers with these two automation practices.
    What are continuous delivery and continuous deployment?
    Along with continuous integration, continuous delivery and continuous deployment are practices that automate phases of software delivery. These practices enable development teams to release new features, enhancements, and fixes to their customers with greater speed, accuracy, and productivity.
    Continuous delivery and continuous deployment have a lot in common. To understand the differences between these practices—and find out which one you want to implement—we need to identify the phases of software delivery we can automate.
    Automating the software delivery process

    Consumers demand increasing personalization and security from products. To meet those demands and deliver software faster and more reliably, development teams can adopt a DevOps culture.
    A DevOps culture breaks down siloed disciplines and unifies people, process, and technology to improve collaboration and coordination. As a result, code changes reach production—and new value reaches the customer—as soon as possible.
    Though development, IT operations, quality engineering, and security teams all work closely together under DevOps, the software delivery process remains just as complex. DevOps organizes software delivery into four phases: plan, develop, deliver, and operate.
    Software delivery in DevOps
    Without automation, development teams must manually build, test, and deploy software, which includes:

    Checking in, testing, and validating code.
    Merging code changes into the main branch.
    Preparing code to go live.
    Creating a deployable artifact.
    Pushing code to production.

    Phases to automate
    Continuous integration, continuous delivery, and continuous deployment are all practices that automate aspects of the develop and deliver phases. And each practice takes the automation one step further, starting with continuous integration.

    The difference between continuous delivery and continuous deployment
    Continuous integration
    To describe continuous delivery and continuous deployment, we’ll start with continuous integration. Under continuous integration, the develop phase—building and testing code—is fully automated. Each time you commit code, changes are validated and merged to the master branch, and the code is packaged in a build artifact.

    Continuous delivery
    Continuous delivery automates the next phase: deliver. Under continuous delivery, anytime a new build artifact is available, the artifact is automatically placed in the desired environment and deployed.

    Continuous integration and continuous delivery (CI/CD)
    When teams implement both continuous integration and continuous delivery (CI/CD), the develop and the deliver phases are automated. Code remains ready for production at any time. All teams must do is manually trigger the transition from develop to deploy—making the automated build artifact available for automatic deployment—which can be as simple as pressing a button.

    Continuous deployment
    With continuous deployment, you automate the entire process from code commit to production. The trigger between the develop and deliver phases is automatic, so code changes are pushed live once they receive validation and pass all tests. This means customers receive improvements as soon as they’re available.
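    As an illustrative sketch, a two-stage Azure Pipelines YAML definition shows where that trigger sits: with continuous delivery the Deploy stage waits on an approval configured on the target environment, while with continuous deployment no approval is configured and it runs automatically.

    ```yaml
    trigger:
      - main                      # every commit to main starts the pipeline

    stages:
      - stage: Build              # continuous integration: build, test, package
        jobs:
          - job: BuildAndTest
            pool:
              vmImage: ubuntu-latest
            steps:
              - script: echo "restore, build, run tests, publish the build artifact"

      - stage: Deploy             # delivery vs. deployment is decided by the
        dependsOn: Build          # approvals and checks configured on 'production'
        jobs:
          - deployment: DeployToProduction
            pool:
              vmImage: ubuntu-latest
            environment: production
            strategy:
              runOnce:
                deploy:
                  steps:
                    - script: echo "deploy the validated artifact"
    ```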

    Continuous delivery vs. continuous deployment: which one should you choose?
    Whether you adopt continuous delivery or continuous deployment, you’ll find tools to support you.
    Before you consider which of these practices to implement, determine if your organization has a DevOps culture that can support them. Next, because DevOps teams strive to automate the entire software delivery process, the question is not “which one is better?” Instead ask, “do we need a manual trigger between continuous integration and continuous delivery?”
    As you look for the answer, use these questions to guide you:

    Can you deploy without approval from stakeholders?
    Do your system and gating requirements allow for end-to-end automation?
    Can you expose your customers to production changes a little at a time?
    Does your organization respond to errors in production quickly?

    If you answered yes to all, you may want to consider practicing continuous deployment and automate software delivery completely—from code commit to production.
    If you answered no to any, you may need to start with continuous integration and continuous delivery (CI/CD). You’ll automate the creation of production-ready code that’s always just one manual approval from deployment. Over time, you can work toward continuous deployment and full automation of your software delivery process.
    Either way, you’ll gain the common advantages of each practice:

    Changes are automatically built, validated, and tested.
    Code is always deployable—no more release-day anxiety.
    Releases receive faster stakeholder and customer feedback.
    Developers are more productive with fewer manual and administrative tasks.
    Since changes are small and frequent, failures are rare and create minimal instability.

    Tools for continuous integration, continuous delivery, and continuous deployment
    DevOps teams rely on toolchains—series of connected software development programs—to automate software delivery. The tools you’ll use depend on which automation practice you choose, and which phases that practice automates. Here are some examples.

    3 years ago Reply

  11. We help you stay in comfort while you take care of your business
    NorthLasVegasjala

    What is DevOps and Why Should I Care?
    The great thing about the definition of DevOps is: there are so many to choose from. Many articles, books, and courses have been written about our Industry’s latest favorite buzzword. Websites, conferences, and entire companies have been founded around it.
    We have thought leaders, practitioners, and people trying to catch on to the wave but the one thing we don’t have is a clear definition of what it is. This article should give you a better idea.
    This is the first in a series of articles explaining DevOps practices, and we’ll start at a very high level and dig down from there.
    So What is DevOps Anyway?
    DevOps is not software, or just a methodology, it’s a culture. You can’t buy it or download it, but you can learn it. Most definitions start with the idea that Operations and Development must work together and throw in automation for good measure. But it’s much more that that.
    DevOps is NOT:

    Merely automating everything
    Moving to containers / microservices
    Just continuous integration

    There are a lot of myths and misconceptions surrounding this. You might hear things like this:

    “Throw some Kubernetes in there so we can be DevOps!”

    You can’t just implement containers, some automation, and CI/CD and say “we are DevOps now”. These things enable DevOps but it’s more about the people than the technology.

    “Developers are now Operations people, and vice versa”

    While this is true, it’s usually exaggerated. There is more overlap between these groups than before, but roles are still clearly defined. Just telling your engineers they have to provision their own machines now isn’t adopting DevOps.

    DevOps is just releasing things every hour like big companies do.

    While this may be a goal, it’s just a small part of the overall intention of DevOps culture. Releasing bad code every hour isn’t an improvement.
    DevOps is:
    A mindset and methodology that:

    Fosters communication and collaboration
    Focuses on rapid IT delivery
    Leverages tools to move quickly

    DevOps is focused on shortening the Software Development Lifecycle (SDLC), using feedback loops to develop features and fix bugs at an increased rate.
    A large part of DevOps is focusing on smaller, more frequent releases. In the past we released new versions of software over months or years, we now do it in days or hours.
    I’ll focus on the concept of batch size and WIP because they’re very important to this process.

    The overall goal of DevOps is to increase speed and quality without burning out your engineers.

    The Main Principles of DevOps
    The main principles of DevOps are outlined clearly in the three ways of DevOps.
    Way #1: Systems Thinking

    The first way is systems thinking. Rather than focusing on a fast IT department or a fast developer group, they both work together as a system. Their goal is to go from idea to feature as quickly as possible. This is your value stream. The idea behind it is simple: once it’s decided a feature needs to be made, developers create it and IT deploys it as quickly as humanly possible.
    Automation plays a big part here because it enables you to push code through the system as quickly as possible.
    When successful you’re committing code, that’s automatically pushed, tested, and integrated to the customer quickly and efficiently.
    Way #2: Amplify Feedback Loops

    The 2nd way has to do with creating a feedback loop. Developers are pushing code to the Ops team, and the Ops team is giving feedback about that software. Then developers change or improve the software based on that feedback. It continues in a loop.
    Some examples of IT feedback:
    Way #3: Culture of Continual Experimentation and Learning

    The third way builds on that feedback loop and creates more self reinforcing feedback loops. It encourages continual experimentation in short iterations. Learning from mistakes and benefiting from improvements.
    The experimentation component here is huge. DevOps is a learning process. It’s about coming up with ideas, implementing them, and learning from them as quickly as possible. You can’t do that on six-month release cycles. You can’t do that if you’re burning time moving software from place to place.

    DevOps creates a culture of learning that works in small, efficient workloads and using automation to free up people for meaningful work.

    So let’s break down some problems and see why we bother with DevOps in the first place.
    What Problems Does DevOps Solve?
    I’ve been in IT and Development for nearly 2 decades. To be more precise, I’m “I remember when IT was a cost center” years old. I still remember the first time I heard the term “DevOps” around 2011. One of my enthusiastic tech friends started ranting on about how IT operations and Developers would have to merge their duties and it was going to change the world.
    My first thought was “SWEET! I love IT operations and I’m a developer. I get to provision machines now? awesome”. No sarcasm intended, I’ve always loved the IT side of things, years after leaving it. The idea that I would get to do IT stuff as a developer was appealing. That meant low resistance for me personally, and likely why I pursued it so hard.
    I now have a better understanding of DevOps than I did then, and I still don’t feel like an expert. It’s ever evolving and changing by nature. But I have seen a lot of changes come directly from adopting a DevOps mindset.

    One of the greatest intentions of DevOps is removing bottlenecks

    There have always been bottlenecks in development and there always will be. It may be a person, it may be a technology, it may be bureaucracy. DevOps cultures try to minimize all bottlenecks if possible.
    1. Release Cadence
    How many times have you heard this phrase?

    We must wait until the release goes out to focus on that

    Having large batch sizes and huge manual deployments means each software release is a huge event. Bugs and features have been carefully triaged by the team and we’ve decided what will be included in the next release. Everything else is sidelined.
    That’s fine unless something crucial comes up in the six months you’re spending developing the new software. Security and compliance issues come to mind here. If you have a big security flaw, you have to build a patch and get it fixed now, not months from now. So, being a good engineer, you find a way to work it into the software for now.
    As more of these pile up, you watch your deadlines get pushed further out while you work on hot fixes and implementations, at the same time folding those fixes into the new software.
    How DevOps solves this problem: by making software releases a less remarkable event. We aren’t bringing pizzas and a cake to the team for a software deployment anymore. It’s happening every week, every day, or even every hour.
    By breaking the work into smaller chunks and continually integrating you no longer have to wait months to implement changes.
    2. The Overworked Engineer
    How about this phrase?

    We must wait for Jane to fix that. It will be awhile, she has a lot on her plate right now.

    If you’ve worked in this industry longer than a week, you know this engineer. You’ve probably been this engineer. The person who knows so much about one particular area or technology that they take ownership of it completely.
    They’re smart, driven, and crucial to the success of the project, maybe even the whole company. They might be an infrastructure engineer, a developer, or any number of roles and act as a “gatekeeper” for software.
    This type of bottleneck creates a host of problems for your team. Software can’t move quickly without their intervention. More importantly, they’re working too hard. They’re sacrificing personal time with family or friends, and constantly in a state of pressure. They’re vulnerable to health issues, or leaving the organization.
    How DevOps solves this problem: by breaking down silos and spreading responsibility among more members of the team, and by breaking work down into smaller tasks to improve flow. This relieves the pressure on any one individual and keeps work flowing.
    3. Unplanned Infrastructure Requests

    I need a server for this, and we can’t move forward until we get it

    Early in my career I did provisioning. This was my usual process:

    Gather requirements
    Design a server
    Order the parts (if not on hand)
    Assemble the machine
    Put an Operating System on it
    Configure Software
    Find space on the rack
    Plug it in
    Establish access for the person needing it

    I did this over and over for people day after day. Don’t get me wrong, I loved it but I’m also glad this isn’t how we do things anymore. This stuff took time, and I can remember being able to do about 5 machines concurrently before I started mixing things up and missing details.
    Last year, my process for getting a server was much different.

    Gather requirements
    Load up Jenkins, select a profile
    Throw some parameters in if necessary
    Click a button
    Get IP address of server

    The process wasn’t much different if I had to log into Azure and deploy one there.
    Guess which one of these takes longer? Now this problem is being solved by virtualization and cloud platforms, but DevOps is a big driver of it.
    How DevOps Solves This Problem: DevOps culture pushes the idea of automating everything, including server configuration or container deployment. We used to name our servers like pets, now we number them like cattle. They should be instances of a common design rather than a special entity.
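To make that concrete, here is a minimal sketch of what the “click a button” step can boil down to when the automation targets Azure; the resource group, VM name, size and image alias are placeholder values, not the exact setup described above:

# Create (or reuse) a resource group to hold the server
az group create --name demo-rg --location eastus

# Provision a small Linux VM from a stock image; SSH keys are generated if absent
az vm create \
  --resource-group demo-rg \
  --name demo-vm-01 \
  --image Ubuntu2204 \
  --size Standard_B2s \
  --admin-username azureuser \
  --generate-ssh-keys

# Print the public IP address to hand back to the requester
az vm show -d --resource-group demo-rg --name demo-vm-01 --query publicIps -o tsv

The point is not the specific tool: whether it is Jenkins, a cloud CLI or Terraform, the request that used to take days becomes a repeatable, scripted action.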
    4. Testing Phase Slowdowns

    We won’t have the fix out for another week or two, it’s still going through testing.

Testing and QA used to be an insanely bad process. I can remember developing software that didn’t see the light of day for months. This was partly due to the old-school waterfall cadence of releasing large chunks of software at a time, and partly due to testing and how long it took.
It wasn’t the fault of the testers. We dumped piles of software on them at once with a big list of features and bugs we had fixed... probably. The iterations were so large that by the time a tester had found a bug, we’d already piled a bunch of software on top of it, so regressions were widespread by then.
    How DevOps Solves This Problem: Two crucial changes here:
Batch size: with smaller batches of work you have lower risk and a single small feature or bug to check. Testing completes much faster.
Automation: with DevOps, so much testing is automated that you can write code, commit it, and watch it go through the testing process. Human testers are not obsolete, however; they build the test automation and spot-check things as they go.
    DevOps frees testers to be more productive.
    Big Picture
    DevOps can be a huge cultural shift for your organization. However, it’s crucial in this day and age.

    Companies are innovating faster than ever before
    The technical barrier to entry is lower than it’s ever been
    You can improve speed without sacrificing quality

    These are the reasons everyone is talking about DevOps. This is why people are scrambling to implement new techniques.
    This is the first of a series where I will explain DevOps principles, and how you can apply them. We’ll explore some use cases and practices to help you as a developer, operations person, or manager to get the most out of this cultural movement.
    If you want to learn more about DevOps in the meantime,

    Read The DevOps Handbook
    Check out Pluralsight’s DevOps Course Path
    Follow me on Dev.To to be notified of future articles

Note: In December I’ll be doing a webinar titled “Enabling Low Risk Releases” where I’ll talk about how you can use DevOps principles to improve your release velocity. Sign up here if you want to join me!
    How to configure VS Code for Azure DevOps
    Today we have another blog post from our guest blogger Gourav Kumar. This time he writes about using VSCode for Azure DevOps and making the connection.
    1. Overview
This article provides details about how to use VS Code for Azure DevOps along with Git repositories. The good part about this setup is that it is very easy to set up and requires very little computing power.

    CPU = 1.6 GHz or faster processor
    RAM = 4 GB
    Storage = 2 – 5 GB
Supported OS = Windows, macOS and Linux (.deb and .rpm packages).

This document explains the configuration needed for Windows 10, but as mentioned above VS Code is supported on the other operating systems as well. The overall setup can be divided into three high-level steps:

    Installation
    Setup and Configuration
    Using VSCode with Azure DevOps and Terraform.

    2. Installation

    Download the Visual Studio Code installer for Windows.

For installation of VS Code, download the installation package from https://code.visualstudio.com/download
The installation package is different for each OS, so download the correct package for the OS you want to use.

    Once it is downloaded, run the installer (VSCodeUserSetup-.exe). This will only take a minute.

    By default, VS Code is installed under C:\users\\AppData\Local\Programs\Microsoft VS Code.

    3. Configuration
After completion of the installation, a shortcut for Visual Studio Code is created on the desktop as shown below.

Click the icon to launch VS Code; this opens an empty home page with no configuration.

In order to configure VS Code for Azure DevOps and Terraform, we need the Azure and Terraform extensions mentioned below installed in VS Code.

    Azure Account, The Azure Account extension provides a single Azure sign-in and subscription filtering experience for all other Azure extensions. It makes Azure’s Cloud Shell service available in VS Code’s integrated terminal.
    Azure Repos, This extension allows you to connect to Azure DevOps Services and Team Foundation Server and provides support for Team Foundation Version Control (TFVC). It allows you to monitor your builds and manage your pull requests and work items for your TFVC or Git source repositories. The extension uses your local repository information to connect to either Azure DevOps Services or Team Foundation Server 2015 Update 2 (and later).
    Azure Resource Manager (ARM) Tools, This extension provides language support for Azure Resource Manager deployment templates and template language expressions.
    Azure Terraform, The VSCode Azure Terraform extension is designed to increase developer productivity authoring, testing and using Terraform with Azure. The extension provides terraform command support, resource graph visualization and CloudShell integration inside VSCode.
    PowerShell, This extension provides rich PowerShell language support for Visual Studio Code. Now you can write and debug PowerShell scripts using the excellent IDE-like interface that Visual Studio Code provides.
Terraform, Syntax highlighting, linting, formatting and validation for HashiCorp’s Terraform.

To install the extensions in VS Code, go to the Extensions tab in VS Code or press Ctrl+Shift+X.

In the search box, search for the extensions mentioned above one by one and install them. After successful installation, the extensions are visible in the Extensions tab as shown in the screenshot below.

Note: make sure each extension is installed and enabled; click on each extension and verify that it is in the enabled state.
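If you prefer the command line, the same extensions can be installed with VS Code’s code CLI. A small sketch; the extension identifiers below are the usual Marketplace IDs, but verify them in the Extensions view before relying on them:

code --install-extension ms-vscode.azure-account              # Azure Account
code --install-extension ms-vsts.team                         # Azure Repos
code --install-extension msazurermtools.azurerm-vscode-tools  # ARM Tools
code --install-extension ms-azuretools.vscode-azureterraform  # Azure Terraform
code --install-extension ms-vscode.powershell                 # PowerShell
code --install-extension hashicorp.terraform                  # Terraform

code --list-extensions   # confirm everything is present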
4. Using VS Code with Azure DevOps and Terraform
The final step in this process is to start working with Azure DevOps and your repositories.
Before doing that, look up how to create an Azure DevOps project and a personal access token (PAT); we will need both shortly when cloning.
    Create a Test-Project in Azure DevOps and clone this in VS Code.

To clone the project, open it; on the right-hand side you will see three dots. Clicking them asks which IDE to clone into; by default it is set to VS Code. All you need to do is click the button and follow the pop-up instructions to clone the entire project.
    Now your project would be visible in VSCode.
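Cloning also works from any terminal if you prefer; a minimal sketch, with an illustrative organization and project name, using the PAT as the password when Git prompts for credentials:

git clone https://dev.azure.com/my-org/Test-Project/_git/Test-Project
# The username can be anything; paste the PAT when asked for the password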

    4.1 Overview of the VSCode user interface.
    On the very left side, we can see all installed extensions and we can click them to explore them and their features.

The very first icon is known as the Explorer. As the name suggests, it opens all the files of our repo in VS Code.
The second, lens icon is used to find and replace keywords (like Ctrl+F and Ctrl+H).
The third is Source Control, where every change gets staged/unstaged and committed/undone.

The icons highlighted in different colours are useful in day-to-day tasks.
The green one changes the toggle view mode, the blue tick mark commits changes, the light blue one refreshes, the yellow one discards changes, and the white plus symbol stages all changes.
    We can check project and repo sync status in the VS Code bar as shown in below screenshot.

In the screenshot we can see that I am working on the master branch and have no changes to sync (the sync symbol shows nothing). We can also see that our project name is Test-Project and that no errors are reported.
We can run PowerShell along with git commands in the same integrated terminal.

If you do not want to use the graphical Git operations (demonstrated in the screenshots above with the highlighted squares), you can use git commands such as git clone, commit, pull and push in the terminal, with Git Bash working in the background. For this, make sure Git Bash is installed on your system.
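For reference, a small sketch of the same round trip from the integrated terminal (branch and file names are placeholders):

git checkout -b feature/readme-update      # create a working branch
git add README.md                          # stage a change
git commit -m "Update README"              # commit it locally
git pull --rebase origin master            # bring in the latest changes
git push -u origin feature/readme-update   # publish the branch to Azure Repos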
    This is the end of today’s post.
    Thanks to Gourav for this clear overview on how to get started to connect these things. At TopQore we are also using Azure DevOps with VSCode and it works nicely. Good luck with all your projects!
    Choose the right Git branching strategy
    Using Git for source control is now an industry standard, but how do you know which workflow is best? Lorna Mitchell explores the options.
    Git is a fantastic tool. Source control tools provide several vital features: a place to keep code, a history of changes, and information about who did what and when. If you and your team are good with commit messages, you may even get bonus info about why a change was made. The distributed tools such as Git and Mercurial also offer many possibilities for collaboration between multiple repositories, as well as providing effective ways of working with branches.

So what exactly is a branch in Git? Take a look at Figure 1. A branch in Git is simply a pointer to a commit. If you look at how a branch is represented in the ‘.git’ directory, you’ll find a text file with the same name as the branch, simply containing the commit identifier of the current ‘tip’, or newest commit, on that branch. The branch itself is created by tracing the ancestry of that single commit, since every commit knows which commit it occurred after.
    As a trainer, one thing I am always asked about is what the ‘right’ Git branching strategy is. My advice varies quite a lot depending on the team and situation, but there are some common patterns that show up time and time again. This post outlines those patterns and explains the situations where they can be most helpful.
    Start simple with GitHub flow
    Made famous by a blog post from Scott Chacon, GitHub Flow is my favourite branching strategy. Why? Mostly because it’s simple while still covering all the essential bases. GitHub Flow simply states that when working on any feature or bug fix, you should create a branch.

    When it is finished, it is merged back into the master with a merge commit. It’s a very easy way for teams and individuals to create and collaborate on separate features safely, and to be able to deliver them when they are done. It creates a graph that looks something like the one in Figure 2.
    These ‘topic’ branches can be quite long-lived, but be aware that the more a branch diverges from the master, the more likely it is that conflicts will be experienced when the feature lands. The rule is that the master branch is always deployable, so features must be finished and thoroughly tested before they are merged in. After a new feature has been merged to the master, it is normal to immediately deploy the new codebase. For this, you could either establish a true continuous deployment approach, or just set up an easy automated deployment process so the code can be deployed often.
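As a concrete illustration, a minimal sketch of one pass through GitHub Flow on the command line (branch names are placeholders):

git checkout master && git pull              # start from an up-to-date master
git checkout -b feature/search-box           # branch for the feature
# ...commit work on the branch, push it and open a pull request...
git checkout master
git merge --no-ff feature/search-box         # the merge commit records the feature landing
git push origin master                       # master stays deployable, so deploy from here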
    There’s a great article on Laura Thomson’s blog that describes this as ‘potentially continuous deployment’. It isn’t considered unusual to deploy code to live platforms multiple times per day.

    Since this model advocates branching for even a ‘hotfix’ – a quick, single-commit change – it creates a graph pattern all of its own. Figure 3 represents an example where there’s just a one-commit diversion on the direction of development.
    Why would someone create a single commit on a branch and then merge it with a merge commit, rather than just applying the change directly to master? There are some philosophical reasons about separation, but for me there are two main advantages.
    Firstly, creating a branch gives the opportunity to open a pull request and get some review/feedback/QA on the change before it merges. Secondly, if you want to either unmerge the change or apply the same change elsewhere, both those things are easier with a branch than with a direct-on-master commit.
    Environment branches

    This is what I like to call the ‘Branch per Platform’ model. In addition to a master branch, there are branches that track the live platform, plus any other platforms you use in your workflow (such as test, staging and integration). To do this, the master remains at the bleeding edge of your project, but as features complete, you merge to the other platforms accordingly (see Figure 4).
    The advantage of this is that you always have a branch that tracks a particular platform, which makes it very easy to hot-fix a problem if, for example, it is spotted on live. By using the live branch as the basis of your fix, you can easily deploy the existing codebase, plus one single change. Once the live platform is sorted out, you can then apply the fix to the master and any other environment branches. This means you avoid a situation where you’re either trying to guess what is on the live platform, or doing a weird direct-on-live fix!
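A rough sketch of that hotfix path on the command line (branch names are placeholders):

git checkout live                          # the branch tracking the live platform
git checkout -b hotfix/payment-timeout     # base the fix on exactly what is deployed
# ...commit the fix...
git checkout live && git merge --no-ff hotfix/payment-timeout   # ship live plus one change
git checkout master && git merge hotfix/payment-timeout         # then bring the fix back to master and the other environment branches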
    Marking with tags
    A very similar alternative is to use tags to mark your releases when you make them. This way, you can look back at the history of the versions that have been live. In fact, GitHub even has a dedicated page for this. This technique is used especially to mark library releases.
    Other tools, such as Composer for PHP, will use the tags to pick up and make available the newly released version, so it’s important that the tags are used correctly. Tags can also be useful in your own projects, perhaps configuring your build or deployment tools to only respond to specifically named tags rather than every commit on a branch, or indeed a whole repository.
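Creating and publishing an annotated tag is a one-liner each way; a small sketch (the version number is arbitrary):

git tag -a v1.4.0 -m "Release 1.4.0"   # annotated tag marking the release
git push origin v1.4.0                 # publish it so build and packaging tools can react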
    Merge commit or not?
    Using Git gives us the ability to merge changes in any order, regardless of how those features were actually built and combined. It also enables us to rewrite history, leading to endless debates over the benefits of a clean and pretty commit graph, versus one that reflects what actually happened.
    A bit like the argument between tabs and spaces, this argument can and will run for some time. That said, both sides of the discussion have merit and it’s an important consideration when you standardise the way that your team will work.
    For any of this to make sense, first we need to talk about merge commits in Git. All Git commits have commit identifiers – those long hex strings – and they are unique. They are made from information about the changeset, the commit message, the author, the timestamp and the parent commit that they were applied to.
    This means if the same commit is applied to a different parent, even if the resulting code ends up identical, the new commit will have a different commit identifier. Merge commits have not one, but two parents.
    In the log, they look like this:
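An illustrative entry (the hashes, author and message here are made up):

commit 1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b
Merge: 9fceb02 4a7e2b1
Author: A Developer <dev@example.com>
Date:   Mon Oct 3 10:14:02 2016 +0100

    Merge branch 'feature/search-box'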
    Notice the extra line in the commit message which starts Merge: . The two numbers are the commit references of the two parents for this commit – one from each of the branches being merged together.
    Fast-forward merges

    Merges don’t always look like this, however. If there are changes on a branch but not, for example, on the master branch, then a ‘fast-forward merge’ will take place. We start with this situation: changes on a feature branch but not on master (as shown in Figure 5). If we just ask Git to merge this branch, it will ‘fast-forward’.

When the merge happens, there’s no merging actually needed. The commits on the feature branch continue on from the newest commit on the master branch, making a linear history. Git therefore fast-forwards, moving the master label to the tip of the feature branch so we can continue from there (Figure 6). If you are aiming for a more traditional merge pattern, you can force this using the --no-ff switch. This creates a history that looks like Figure 7.

The --no-ff switch tells Git not to fast-forward but instead to create the merge commit. Many branching strategies require that this technique be used when branches merge in. Most of the time, we’ll need a merge commit anyway, because there will have been changes on both the feature branch and the branch we’re merging into. In some situations – for me that’s usually when creating a hotfix branch – it can be useful to force the merge commit to make the history clear on exactly which branch has been merged to where.
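A short sketch of the difference (the feature branch name is a placeholder):

# Default behaviour: fast-forwards when master has not moved on
git checkout master
git merge feature/quick-fix

# Alternatively, force a merge commit so the branch shows up in the history
git merge --no-ff feature/quick-fix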
    Branching strategies
    Perhaps the most well-known branching strategy is Git Flow, which is a very comprehensive strategy. So comprehensive, in fact, it needs a whole set of scripts in order to use it properly! In my experience, Git Flow is too much for all but very large and technically advanced teams that are solving problems across multiple releases at one time.
    However, every branching strategy I’ve ever implemented has drawn ideas from this approach, and I’ve tried to break these down in this article.
A branching strategy itself is a process; a document. It’s the place where, as a team, you capture your approach and attitude to the way that you produce, track and ship code. Everyone understands how each of the moving parts in the process works (at least on the code level – social and political issues are a topic for another article), and how to collaborate to achieve the goal that you’re all aiming for. A branching strategy can be as simple as a page on the company wiki – something that gives structure to the way that you work. It may also be a great place to put the Git cheatsheet for how to do the various steps.
    I find that having a branching strategy in place greatly improves both the quality of the work and the confidence and communication of the team. Take a little time to agree and record your strategy, and then go forth and be even more awesome than you were before.
    Lorna Mitchell is an author, trainer and developer specialising in PHP, APIs and Git. You can get more help from her by reading her Git Workbook, which contains practical exercises to level up your skills. This article was originally published in issue 267 of net magazine.
    Azure DevOps Processes Part 3: Overview of the CMMI Process
    I have been slowly working my way through detailing the standard processes available within Azure DevOps. In my previous post, I provided an overview of the Basic process – the simplest process available. This post is fully dedicated to the CMMI process.
The CMMI (Capability Maturity Model Integration) process closely aligns with the waterfall methodology. The CMMI process within Azure DevOps is based on a guide published by the Software Engineering Institute called CMMI for Development: Guidelines for Process Integration and Product Improvement (SEI Series in Software Engineering).
    The below image displays the typical stages and outputs within a standard waterfall project. As you read further into the blog, you will notice that the stages are similar to the states available within key work items in the CMMI process and that the terminology used to describe work items are also a close match to the standard waterfall methodology.
Standard Waterfall Methodology Process
    The work items available within the CMMI process are shown in the image below and described further on:
Epics: Epics within the CMMI process are the same as Epics within the Basic process. Epics capture, at a high level, what is to be delivered. An example of this would be the delivery of different channels, e.g. phone or chat, within a call centre.
Features: Epics contain one or more Features. Features allow the definition of the different focus points which must be delivered in order for the epic to be delivered. E.g. telephony integration and call verification features would be required in order to deliver an epic related to the delivery of a telephone channel in a call centre.
Requirements: Requirements describe the needs of the business and also define what is required for the feature to be complete. Requirements can be associated with one or more Features. It should be noted that a requirement is not a feature, and should contain enough information for both testers and developers to understand the business need and what the system is expected to do to satisfy it. Requirements which define how the system will satisfy the need reduce the flexibility of designing the system and may create issues later in the project, since strictly speaking, changing how the requirement is implemented would mean that a change request should be generated. An example of a requirement that would be associated with the call verification feature above would be that the customer must validate three pieces of information, which could include their name, address, date of birth or credit limit.
Tasks: Tasks provide a breakdown of what needs to be completed in order to complete a requirement. An example of a task which could be associated with the above requirement could be to create the credit limit field on the contact record. (Assuming you’re using a CDS database, the name, address and date of birth fields should already exist.)
Change Requests: Deviations from the agreed scope should be logged as a change request. The CMMI process provides a work item which can be associated to the original requirement(s) or feature(s) affected by the change and also allows the impact of the change to be documented. The below image shows an example of the change request work item.
Review: The Review work item allows users to document meetings within Azure DevOps. These could be review meetings, design meetings or even team meetings. Attendees who are also Azure DevOps users can be selected, and other work items which were discussed within the meeting can be associated for reference. This work item is only available by default with the CMMI process template.
    Issue: The issue work item available within the CMMI process template relates to project related problems and is not the same issue work item used within the Basic process template.
    The CMMI issue work item allows users to describe the issue as well as detail the corrective actions needed to resolve the issue.
Risk: No project is without risks. Within the CMMI process these risks can be logged, tracked and, if necessary, associated with the work items they relate to. Within the Risk work item users are able to describe the risk, specify the probability and make contingency plans that allow everyone (with access) to know what to do in the event that the risk becomes an issue.
Each of these work items has defined standard fields. However, there are also fields available which may not be on the form. The list of fields available within the CMMI process template can be accessed via Microsoft’s Work Item field index page. Additional information related to specific work items can also be found on Microsoft’s documentation pages.
    There are some fields which tend to be under the radar but which are really useful during a project if used correctly. These are:
Activity: The activity field is available on the Task work item and indicates the type of work being completed, e.g. design, testing, documentation or development. This field ties closely to assessing the capacity of team members, as team members can be assigned an amount of time for each activity. Populating the activity field on the task provides visibility of how much time is needed for each type of activity. Previously, the activity field could not be modified in Azure DevOps; however, this is no longer the case. Values can be added or removed from this field without affecting the ability to view capacity as previously described.
Triage: The triage field is available on the Bug, Change Request, Epic, Feature, Issue, Requirement and Task work items. When the work item is in the Proposed state, this field provides direction on what needs to be done. For example, a list of requirements which have been created during analysis may need to be reviewed before the analysis stage is complete. This field gives the team visibility of which requirements are pending information or have not yet been reviewed.
    Committed: Requirements are often created in Azure DevOps before a decision is made on whether the business wishes to include them or not as this aids collaboration and allows an informed decision which can be looked back on if necessary. The committed field specifies if the requirement is within the scope of the project.
    Blocked: Blocked is a standard field available on the Bug, Change Request, Requirement, Risk and Task work item and states that no progress can be made on the work item. An issue could be created and associated to the work item at this point. However, this is a suggested business process and not enforced by Azure DevOps.
Subject Matter Expert: Who are the subject matter experts to refer to in the business if there are questions related to a requirement? The Subject Matter Expert fields allow users to specify one or more team members to be consulted if necessary about the associated requirement. The team members specified must exist as users within Azure DevOps.
    Task Type: The task type field is available on both the task and bug work item. It allows users to track work which was planned within the project vs work which has been added retrospectively to correct or mitigate issues.
    There are two additional work item templates available within Azure DevOps. The Agile Process and the Scrum Process. In my next blog, we will delve into the Agile process and subsequently the Scrum process.
    Azure Function Deployment: ARM
    The Azure Function Deployment task is used to update Azure App Service to deploy Functions to Azure. The task works on cross platform Azure Pipelines agents running Windows, Linux or Mac and uses the underlying deployment technologies of RunFromPackage, Zip Deploy and Kudu REST APIs.
    The task works for ASP.NET, ASP.NET Core, PHP, Java, Python, Go and Node.js based web applications.
Please report a problem at the Developer Community Forum if you are facing problems in making this task work. You can also share feedback about the task there, such as what more functionality should be added to the task or what other tasks you would like to have.
    Pre-requisites for the task
The following pre-requisites need to be set up on the target machine(s) for the task to work properly.
    The task is used to deploy a Web project to an existing Azure Web App. The Web App should exist prior to running the task. The Web App can be created from the Azure portal and configured there. Alternatively, the Azure PowerShell task can be used to run AzureRM PowerShell scripts to provision and configure the Web App.
The task can be used to deploy Azure Functions (Windows/Linux).
    To deploy to Azure, an Azure subscription has to be linked to Team Foundation Server or to Azure Pipelines using the Services tab in the Account Administration section. Add the Azure subscription to use in the Build or Release Management definition by opening the Account Administration screen (gear icon on the top-right of the screen) and then click on the Services Tab.
    Create the ARM service endpoint, use ‘Azure Resource Manager’ endpoint type, for more details follow the steps listed in the link here.
    The task does not work with the Azure Classic service endpoint and it will not list these connections in the parameters in the task.
    Several deployment methods are available in this task. To change the deployment option, expand Additional Deployment Options and enable Select deployment method to choose from additional package-based deployment options.
    Based on the type of Azure App Service and Azure Pipelines agent, the task chooses a suitable deployment technology. The different deployment technologies used by the task are:
    By default the task tries to select the appropriate deployment technology given the input package, app service type and agent OS.

    When post deployment script is provided, use Zip Deploy
    When the App Service type is Web App on Linux App, use Zip Deploy
    If War file is provided, use War Deploy
    If Jar file is provided, use Run From Zip
    For all others, use Run From Package (via Zip Deploy)

    On non-Windows agent (for any App service type), the task relies on Kudu REST APIs to deploy the Web App.
    Works on a Windows as well as Linux automation agent when the target is a Web App on Windows or Web App on Linux (built-in source) or Function App. The task uses Kudu to copy over files to the Azure App service.
Creates a .zip deployment package of the chosen package or folder and deploys the file contents to the wwwroot folder of the function app named in App Service Name in Azure. This option overwrites all existing contents in the wwwroot folder. For more information, see Zip deployment for Azure Functions.
    Creates the same deployment package as Zip Deploy. However, instead of deploying files to the wwwroot folder, the entire package is mounted by the Functions runtime. With this option, files in the wwwroot folder become read-only. For more information, see Run your Azure Functions from a package file.
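For comparison, a rough sketch of the same zip deployment performed by hand with the Azure CLI; the resource group, app name and package path are placeholders:

az functionapp deployment source config-zip \
  --resource-group demo-rg \
  --name demo-func-app \
  --src ./functionapp.zip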
    Parameters of the task
    The task is used to deploy a Web project to an existing Azure Web App or Function. The mandatory fields are highlighted with a *.
    Azure Subscription*: Select the AzureRM Subscription. If none exists, then click on the Manage link, to navigate to the Services tab in the Administrators panel. In the tab click on New Service Endpoint and select Azure Resource Manager from the dropdown.
    App Service type*: Select the Azure App Service type. The different app types supported are Function App, Web App on Windows, Web App on Linux, Web App for Containers and Azure App Service Environments
    App Service Name*: Select the name of an existing Azure App Service. Enter the name of the Web App if it was provisioned dynamically using the Azure PowerShell task and AzureRM PowerShell scripts.
    Deploy to Slot: Select the option to deploy to an existing slot other than the Production slot. Do not select this option if the Web project is being deployed to the Production slot. The Web App itself is the Production slot.
Resource Group: Select the Azure Resource Group that contains the Azure App Service specified above. Enter the name of the Azure Resource Group if it has been dynamically provisioned using the Azure Resource Group Deployment task or Azure PowerShell task. This is a required parameter if the option to Deploy to Slot has been selected.
Slot: Select the Slot to deploy the Web project to. Enter the name of the Slot if it has been dynamically provisioned using the Azure Resource Group Deployment task or Azure PowerShell task. This is a required parameter if the option to Deploy to Slot has been selected.
Package or Folder*: Location of the Web App zip package or folder on the automation agent, or on a UNC path accessible to the automation agent, like \\BudgetIT\Web\Deploy\Fabrikam.zip. Predefined system variables and wildcards like $(System.DefaultWorkingDirectory)\**\*.zip can also be used here.
Select deployment method: Select the option to choose from auto, zipDeploy and runFromPackage. The default value is auto-detect, where the task tries to select the appropriate deployment technology given the input package, App Service type and agent OS.
Runtime Stack: Web App on Linux offers two different options to publish your application: custom image deployment (Web App for Containers) and app deployment with a built-in platform image (Web App on Linux). You will see this parameter only when you have selected ‘Linux Web App’ as the App type in the task.
    For Web Function on Linux you need to provide the following details:
    Runtime stack: Select the framework and version your web app will run on.
    Startup command: Start up command for the app. For example if you are using PM2 process manager for Nodejs then you can specify the PM2 file here.
    Application and Configuration Settings
    App settings: App settings contains name/value pairs that your web app will load on start up. Edit web app application settings by following the syntax ‘-key value’. Value containing spaces should be enclosed in double quotes.

    Example : -Port 5000 -RequestTimeout 5000 -WEBSITE_TIME_ZONE “Eastern Standard Time”

    Configuration settings: Edit web app configuration settings following the syntax -key value. Value containing spaces should be enclosed in double quotes.

    Example : -phpVersion 5.6 -linuxFxVersion: node|6.11

    Best of 2018: Log Monitoring and Analysis: Comparing ELK, Splunk and Graylog
By Ritu Bhargava on December 31, 2018
    As we close out 2018, we at DevOps.com wanted to highlight the five most popular articles of the year. Following is the fifth in our weeklong series of the Best of 2018.


As organizations face outages and various security threats, monitoring an entire application platform is essential to understand the source of a threat or where an outage occurred, as well as to verify events, logs and traces to understand system behaviour at that point in time and take predictive and corrective actions. Log monitoring and log analysis are important for any IT operations team: they help identify intrusion attempts and misconfigurations, track application performance, improve customer satisfaction, strengthen security against cyberattacks, perform root cause analysis and analyse system behaviour, performance and metrics.
    According to a Gartner report, the market for monitoring tools and APM tools is expected to grow to $4.98 billion by 2019, and log monitoring and analytics will become a de facto part of AI Ops. Log analysis tools are emerging as a low-cost solution for monitoring both application and infrastructure (hardware and network).
There is a mix of commercial and open source tools available as the log monitoring and analysis tools market has matured. Some of the key features of log monitoring tools include powerful search capabilities, real-time visibility with dashboards, historical analytics, reports, alert notifications, thresholds and trigger alerts, measurements and metrics with graphs, application performance monitoring, and profiling and tracing of events. This article focuses mainly on the differences between the log monitoring tools ELK, Splunk and Graylog.
    ELK, Splunk and Graylog
    Elastic has put together arguably the most popular log management platform for both open source and commercial (cloud and enterprise) log monitoring tools. The Elastic Stack—more commonly known as ELK Stack—combines Elasticsearch, Logstash and Kibana. Elasticsearch is a modern search and analytics engine based on Apache Lucene, while Logstash provides data processing and enrichment. Kibana offers logs discovery and visualization.
Splunk is a platform for searching, analyzing and visualizing machine-generated data gathered from websites, applications, sensors, devices, etc., covering the entire infrastructure landscape. It communicates with different log files and stores file data in the form of events in local indexes. It provides easy search capabilities and has a wide array of options to collect logs from multiple sources. Splunk is available in both SaaS and enterprise editions.
Graylog offers open source log monitoring tools providing capabilities similar to ELK and Splunk. Graylog performs centralized log monitoring, where Graylog is used for data processing and Elasticsearch and MongoDB are used for search and storage. It provides log archival and drill-down of metrics and measurements.
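Of the three, the ELK components are the quickest to try out locally; a minimal sketch using the public Docker images (the version tag is an assumption, and this is not a production setup):

docker network create elk   # so Kibana can reach Elasticsearch by name

# Single-node Elasticsearch
docker run -d --name elasticsearch --net elk -p 9200:9200 \
  -e discovery.type=single-node \
  docker.elastic.co/elasticsearch/elasticsearch:7.17.0

# Kibana pointed at that node
docker run -d --name kibana --net elk -p 5601:5601 \
  -e ELASTICSEARCH_HOSTS=http://elasticsearch:9200 \
  docker.elastic.co/kibana/kibana:7.17.0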
    The below table provides the comparison analysis based on the open source/free trial editions of ELK, Splunk and Graylog.
    Top 10 Companies using log monitoring tools (ELK Stack, Splunk, Graylog). Information source collected from ELK, Splunk and Graylog websites.
    Features Comparison
General

License/Edition
- Elastic Stack: open source and commercial. Cloud: priced on the cluster (memory, storage, data centre region). Enterprise: priced per instance. Free trial available for 14 days.
- Splunk: commercial. Free: 500 MB of indexing per day. Splunk Cloud: priced on data ingested to the cloud per day; 15-day free trial with 5 GB of data available for search and analysis. Splunk Enterprise: priced on ingested data.
- Graylog: open source and Enterprise; Enterprise priced per month based on daily volume, free for under 5 GB/day.

Implementation Language & Logging Format
- Elastic Stack: Java; JSON.
- Splunk: C++, Python; JSON, .CSV, text files.
- Graylog: Java; GELF (Graylog Extended Log Format).

Community and Enterprise Support
- Elastic Stack: community support available and well documented; 24x7 enterprise support.
- Splunk: community support available and well documented; enterprise support with direct access to the customer support team.
- Graylog: community support available and well documented; enterprise support with unlimited access to the Graylog engineers.

Supported Server Platforms
- Across the three tools: Windows, Linux (Red Hat, Ubuntu, CentOS), OS X and Solaris, with exact platform support varying by product.

Ease of Configuration
- Elastic Stack: need to set up Elasticsearch, Logstash, Kibana and Beats with cluster configuration; medium complexity.
- Splunk: single platform with a server and forwarders on the clients; less complex to set up.
- Graylog: need to set up the Graylog web server, Elasticsearch and MongoDB; medium complexity.

Basic Components Involved
- Elastic Stack: Elasticsearch (search and storage), Logstash (data processing), Beats (log collection and shipping).
- Splunk: Splunk server, Splunk forwarder.
- Graylog: Graylog (data processing and web interface), Elasticsearch (log storage), MongoDB (configuration data storage).

Data

Data Collection
- Elastic Stack: Beats, Logstash, Elasticsearch ingest nodes.
- Splunk: app add-ons and Splunk forwarders.
- Graylog: message inputs and content packs (inputs, extractors, output streams, dashboard configuration), Graylog Sidecar.

Data Formats
- Elastic Stack: common log file formats, e.g. nginx and Tomcat logs.
- Splunk: accepts any data type, including .csv, JSON, log files, XML, etc.
- Graylog: common log file formats, e.g. nginx (error_logs, access_logs), syslog, rsyslog and GELF.

Additional Data Inputs
- Elastic Stack: HTTP, TCP, syslog input, various Logstash plugins.
- Splunk: HTTP, TCP, scripted inputs, syslog input.
- Graylog: GELF Kafka, GELF HTTP, Beats, message inputs, rsyslog and syslog-ng.

Database & Schema
- Elastic Stack: Elasticsearch, a document-oriented DB; schema defined in the specific Beats plugins / log forwarders.
- Splunk: uses its built-in data structures and stores the indexes to disk.
- Graylog: Elasticsearch to store the logs; MongoDB for metadata and dead-letter-queue messages; schema defined in the specific content packs.

Data Correlation & Aggregation
- Elastic Stack: done through the aggregate filter and Logstash correlation event filters.
- Splunk: done through index event correlate commands and decorators.
- Graylog: done through aggregate filters and decorators.

Centralized Logging Support
- Elastic Stack: available with log shippers.
- Splunk: available with the Splunk Enterprise forwarder and Universal Forwarder.
- Graylog: available with the Graylog Sidecar log collector and rsyslog.

Data Import / Export
- Elastic Stack: import/export data from various sources such as InfluxDB.
- Splunk: export/import via Splunk DB Connect to relational databases.
- Graylog: GELF output plugin, REST API.

Data Transport
- Elastic Stack: Kafka, RabbitMQ, Redis.
- Splunk: persistent queues and processing pipeline components.
- Graylog: Apache Kafka, RabbitMQ (AMQP).

Data Collection Intervals
- All three: real-time streams and batches.

Search

Search Capabilities
- Elastic Stack: a highly scalable full-text search and analysis engine built on Elasticsearch.
- Splunk: dynamic data exploration to extract everything, with its own search language.
- Graylog: full-text search on real-time UDP/GELF logging with an intuitive search interface.

Search Language
- Elastic Stack: Query DSL (Lucene).
- Splunk: SPL (Splunk Processing Language).
- Graylog: very close to the Lucene syntax.

Protocol Used for Read/Write Operations
- All three: REST and HTTP APIs.

Log Filtering
- Elastic Stack: search filtering and the Grok filter plugin; search filters with field levels, saved searches, graphs, etc.
- Splunk: filters logs by applying transforms to events.
- Graylog: uses Drools rules, extractors and field filters.

Visualization

Reporting & Historical Data Management
- Elastic Stack: available in the X-Pack component; quickly generates reports of any Kibana visualization or dashboard and any raw data; each report is print-optimized, customizable and PDF-formatted.
- Splunk: reports are created for saved searches, visualizations or dashboards; scheduled reports and report priorities are supported, on both historical and streaming data.
- Graylog: no built-in reporting capability; the REST API can be used to generate your own reports on historical and streaming data.

Alerting
- Elastic Stack: supported through X-Pack / Watcher configurations; integrations available to send alerts to AIOps tools.
- Splunk: built-in feature, supports real-time or scheduled alerting; integrations available to send alerts to AIOps tools.
- Graylog: built-in feature, alerts raised based on streams; integrations available to send alerts to AIOps tools.

Monitoring

Server and Device Log Monitoring
- Elastic Stack: uses Metricbeat and Filebeat to capture server-related logs.
- Splunk: uses Splunk Insights for Infrastructure.
- Graylog: uses Graylog Beats and NXLog.

Network Log Monitoring
- Elastic Stack: captures network logs through Packetbeat.
- Splunk: uses Splunk MINT to capture network logs.
- Graylog: uses the SNMP and NetFlow plugins.

Cloud Log Monitoring
- Elastic Stack: using Logstash and Filebeat modules.
- Splunk: configurable via add-ons for different clouds.
- Graylog: plugins available to pull logs from the cloud.

Container Log Monitoring
- Elastic Stack: using Logstash, Filebeat and Metricbeat, with Logstash processors for Docker logs.
- Splunk: Splunk logging driver for Docker.
- Graylog: uses Filebeat and native Graylog (GELF logging driver) integration.

Kubernetes Orchestration Monitoring
- Elastic Stack: using Fluentd and Metricbeat to ingest logs from Kubernetes.
- Splunk: using a collector built for Kubernetes.
- Graylog: using the Filebeat collector sidecar.

Database Log Monitoring
- Elastic Stack: enabled through Filebeat modules and Logstash.
- Splunk: enabled through Splunk add-ons for different databases and Splunk DB Connect.
- Graylog: enabled through add-on plugins and GELF libraries for different databases, including MySQL, MongoDB, etc.

End-User Transaction and Application Performance Monitoring
- Elastic Stack: enabled through APM, the different Beats, Logstash correlation and X-Pack components.
- Splunk: enabled through the Splunk App for Synthetic Monitoring, Splunk ITSI modules and the Splunk MINT SDK.
- Graylog: application logs and add-ons from the marketplace.

Analytics (Causal Analysis, Anomaly Detection, etc.)
- Elastic Stack: supports machine learning via X-Pack.
- Splunk: Splunk IT Service Intelligence modules provide machine learning algorithms for anomalies, etc.
- Graylog: no out-of-the-box machine learning support.

Others

Customization / Extensibility
- Elastic Stack: customized dashboards with Kibana controls and search queries; custom metricsets and plugins; custom rules to define thresholds and alerts; can be extended and integrated with other tools via webhooks, plugins and programmatic access via a REST HTTP/JSON API; the hosted Elastic Stack provides out-of-the-box user management via X-Pack.
- Splunk: flexible to add or edit components and views on Splunk dashboards using the SplunkJS stack and JavaScript; custom metrics indexes and data collection using StatsD and collectd; can be extended and integrated with other tools via webhooks, plugins and programmatic access via a REST HTTP/JSON API, with 1,000+ add-ons; out-of-the-box user management; Splunk Enterprise Security.
- Graylog: flexible to add custom dashboards based on streams using widgets, graphs and CLI stream dashboards; custom index models and custom extractors with the metrics needed, using the REST API and content packs; a few out-of-the-box alert conditions, with custom alerts defined on streams and conditions; plugins available to integrate with other tools; user management, LDAP and other authentication integrations (single sign-on / two-factor).

Scalability
- Elastic Stack: supports horizontal scalability and high availability with master/slave clusters.
- Splunk: scalable with distributed deployment of Splunk Enterprise components for data input, indexing and search.
- Graylog: scalable with multiple nodes for Elasticsearch, MongoDB and the Graylog server, along with queues (Kafka/RabbitMQ).

Backup / Restore
- Elastic Stack: snapshot backups of indexes to any external repository such as S3, Azure, etc.; retention on Elasticsearch through Elasticsearch Curator.
- Splunk: supports backup of configuration, indexes and warm DB buckets based on policies.
- Graylog: an archival plugin available in Graylog Enterprise backs up indexes and restores them to a new cluster, via the web UI or REST API.
Log monitoring is essential as part of full-stack monitoring, and can provide better insights for faster troubleshooting. The choice of log monitoring tool will depend on your requirements, infrastructure, price, and monitoring and maintenance needs. Elastic Stack, Splunk and Graylog have enterprise-ready platforms and solutions for log monitoring, and all are widely used across multiple enterprises and domains.
    Azure DevOps with Azure Kubernetes Service (AKS)
Azure Kubernetes Service is a managed service offering provided by Azure for customers to run microservices applications. AKS is the easiest way to create a fully functional Kubernetes cluster in a few minutes. The difference between AKS and on-premises Kubernetes (or Kubernetes deployed on top of VMs) is that with AKS, Azure manages the master nodes and, as a customer, we only have to take care of the worker nodes. More details can be found here.
Azure DevOps is used to deploy the application through CI/CD (continuous integration / continuous deployment). With Azure DevOps, developers can create different pipelines for production and development, and with each code commit to a branch the application is deployed as a new version. Azure DevOps helps cloud administrators and developers automate the process of application deployment.
    Prerequisites
    Create Sample project for Azure DevOps
First we need an application to work with and deploy to the AKS service. The Azure DevOps team has provided a website that we can use to generate sample demo applications for different scenarios and technologies.
Let's go to the link above and create a sample application to work with for the demo. Click Choose Template to select a template.
    Create a Project
    Under DevOps select Azure Kubernetes Service

    Click Create Project to create sample project

After creating the project, it is visible under your DevOps organization.
Create Azure Kubernetes Service cluster
We need to create an AKS cluster to deploy the application to. For this we are going to use the Azure CLI; an illustrative sketch of the full command sequence appears after the list of steps below.
As a first step we need to create a few variables.

Get the latest Kubernetes version available.

    Create Resource Group

    Create Kubernetes cluster using VM scale set and cluster autoscaler

Create an Azure Container Registry to store images built by Azure DevOps

    Connect ACR with AKS cluster

Create the SQL Server and database needed for the application
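An illustrative sketch of the whole sequence with the Azure CLI; every name, location, size and password below is a placeholder, and exact flags and query paths can vary between CLI versions:

# Variables used throughout
RG=aks-demo-rg
LOCATION=eastus
AKS=aks-demo-cluster
ACR=aksdemoacr$RANDOM
SQL=aks-demo-sql
DB=appdb

# Latest Kubernetes version available in the region
VERSION=$(az aks get-versions --location $LOCATION \
  --query 'orchestrators[-1].orchestratorVersion' -o tsv)

# Resource group
az group create --name $RG --location $LOCATION

# AKS cluster backed by a VM scale set with the cluster autoscaler enabled
az aks create --resource-group $RG --name $AKS \
  --kubernetes-version $VERSION \
  --vm-set-type VirtualMachineScaleSets \
  --enable-cluster-autoscaler --min-count 1 --max-count 3 \
  --node-count 1 --generate-ssh-keys

# Container registry for the images built by Azure DevOps, attached to the cluster
az acr create --resource-group $RG --name $ACR --sku Standard
az aks update --resource-group $RG --name $AKS --attach-acr $ACR

# SQL server and database needed by the application
az sql server create --resource-group $RG --name $SQL \
  --admin-user sqladmin --admin-password 'ChangeMe-1234!'
az sql db create --resource-group $RG --server $SQL --name $DB --service-objective S0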

After the DB is created, log in to Azure DevOps and select the project created through the Azure DevOps Demo Generator.
    Edit variables and parameters for Build Pipeline
Under Pipelines in the left menu, select the pipeline and click Edit to modify it as below.

This shows all the tasks associated with the build process. We need to make a few modifications to the build tasks. As the first step we need to replace the build variables as below with our ACR, DB, etc.

    Next, we need to change the subscriptions in each build step and select relevant resources such as ACR settings as below.

After the build variables are changed, we next need to modify the release pipeline.
Edit Release Pipeline
To deploy the built application to the Kubernetes service, we have to make a few modifications to the release pipeline. First we need to add release variables as below.
    To edit the release pipeline follow below steps.

Change the hosted agent for the SQL DACPAC task. Make sure to select a Windows Server agent pool for this task.

For the AKS deployment task, change the subscription and cluster details.

    Run the Build Pipeline
When running the pipeline as below, it starts to build the application and, if the build is a success, triggers the release process.
    Build process status

If the build completes successfully, it starts the release pipeline.

The release completes as below if no errors are raised.

The next step is to verify that the app is deployed to the Kubernetes cluster. First download the cluster credentials as below.

Check that the frontend and backend pods are created along with the load balancer service.
kubectl get svc will list the services as below; copy the public IP and check that the application is running on that IP.
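Reusing the placeholder names from the sketch above, those two checks look like this:

az aks get-credentials --resource-group $RG --name $AKS   # merge the cluster credentials into kubeconfig
kubectl get pods                                          # confirm the frontend and backend pods are running
kubectl get svc                                           # note the EXTERNAL-IP of the load-balanced service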

Before checking the application, we need to enable the SQL Server firewall to allow access from Azure services, as below.
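Again with the placeholder names, the rule that lets Azure services through the SQL Server firewall is a single command (the 0.0.0.0 start and end address is the convention for “allow Azure services”):

az sql server firewall-rule create --resource-group $RG --server $SQL \
  --name AllowAllWindowsAzureIps \
  --start-ip-address 0.0.0.0 --end-ip-address 0.0.0.0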
    DevOps and Microservices
    What is DevOps?
Agile software development helped bridge the gap between business users, developers and testers by enabling more collaboration. It helped cut down time to market and develop systems in an incremental fashion. A key element in achieving this is bridging the silos between development and operations teams. DevOps brings practices that automate processes which are historically manual, siloed and slow. Organizations can create efficient and effective teams by adopting Agile, DevOps and Microservices.

    It is a cross-disciplinary community of practice dedicated to the study of building, evolving and operating rapidly-changing resilient systems at scale — Jez Humble

DevOps empowers every development team to be agile and accomplish its tasks with less dependency on other teams. DevOps encourages shared responsibility between development and operations teams. One major shift the development team should adopt is building quality into the development process. Emphasis on quality should start from planning itself. Every check-in is tested using unit and integration tests. Unless the team follows proper white-box testing, the benefits of DevOps cannot be seen.

There are innumerable tools to set up DevOps, but success comes from the right culture in the team.

A development team partly owns the responsibility of releasing features all the way to production and monitoring them. This helps a developer understand deployment issues and the need for effective logging, and it improves the developer's efficiency. At the same time, the operations team shares the system's business goals. The operations team is involved in defining the application architecture and the coding practices followed by the development team. This in turn helps the operations team be more effective in its role.
    DevOps in Microservices
Microservices, as we know, are independent blocks which are stitched together to build a business process or a business application. Each microservice evolves independently, so its deployment also happens independently. Given this background, DevOps becomes core for any microservices development team. The practices followed in DevOps help the team release features quickly without impacting other services that might depend on their functionality.

    Docker allows you to package an application with all of its dependencies into a standardized unit for software deployment.

Container technology is a game changer for Microservices. Each microservice packaged in a container becomes a single deployable component. Containers help the DevOps team respond to spikes in network calls by scaling up or down, and they help cut testing cycles: a container created and tested in the QA environment gets promoted to other environments such as staging and production. In a way, we rule out issues caused by building the code separately in every environment. All configuration is maintained external to the container so that it can vary as the environment changes.
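As a small illustration of externalised configuration (the image name, variables and port are hypothetical), the very same image can be promoted from QA to production with only its environment variables changing:

# QA environment
docker run -d --name orders-qa \
  -e DB_HOST=qa-db.internal -e LOG_LEVEL=debug \
  -p 8080:8080 myregistry/orders-service:1.4.2

# Production environment: same image, different configuration
docker run -d --name orders-prod \
  -e DB_HOST=prod-db.internal -e LOG_LEVEL=info \
  -p 8080:8080 myregistry/orders-service:1.4.2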
    Phases of DevOps maturity
    Continuous integration
It is a development practice that requires developers to integrate code into a shared repository very often. There is no defined frequency, but every developer is expected to check in multiple times a day. Each check-in is verified against the automated tests, allowing teams to detect problems early. Continuous integration is the first step down the path towards DevOps maturity.

    Best practices in Continuous integration

Quality first: Emphasize writing unit and integration tests for every component. These tests should act as gatekeepers for maintaining the quality of the code. Code coverage close to 100% is desirable. In a Microservices architecture, the coverage for all integration points (contracts) should be 100%.
Frequent code check-in: Team members should inculcate the habit of checking in code frequently. It helps in identifying issues early in the development cycle.
Automate the build: The build process should be automated; most teams already follow this. It is important that the build time be very short, which goes well with Microservices since each service is small and independent.
Communicate: This is a very important aspect of DevOps, and it should happen through tools. Transparent communication across the stages of code check-in, build on the integration box, execution of unit and integration tests, and deployment/rollback is key. Penalize the team members who contribute to the most build failures.
Code smells: Measure code quality through the available linting tools. Dashboards such as SonarQube can be used to track code quality, and build automation should include a step that pushes code metrics into SonarQube. By giving importance to code quality issues, the development manager can create the right team culture. These tools measure the technical debt and cyclomatic complexity of the code.

Continuous integration provides faster feedback, increases transparency, reduces integration time, improves build quality, and lets testers focus more on business features.
    Continuous Delivery
It is the next step in the DevOps maturity model. Continuous integration helps deliver effective builds and automates testing as much as possible. To adopt continuous delivery, we should follow all the practices of continuous integration and, on top of that, automate the release process. The trigger to release remains manual, but the actual release process is automated.

    To successfully implement continuous delivery, you need to change the culture of how an entire organization views software development efforts.

Continuous delivery can be achieved using tools like Jenkins, CircleCI or Bamboo. Jenkins is open source and widely used for establishing delivery pipelines. Every change to your code goes through a complex process on its way to being released. This process involves building the software in a reliable and repeatable manner, as well as progressing the build through multiple stages of testing and deployment. A delivery pipeline is a workflow of the different stages in a release process, and Jenkins can be configured to take an action based on the result of each stage. Pipelines in Jenkins can be defined using the declarative pipeline syntax in a Jenkinsfile.

    Best practices in Continuous Delivery

API versioning: Teams should follow API versioning. Any changes to existing features should ensure backward compatibility.
Feature flags: Design to place new features behind feature flags. Continuous check-ins can push incomplete features into production; feature flags let product owners enable a feature only when it is ready. Product owners also gain the flexibility to selectively enable features for a few early adopters and seek feedback, which helps assess the customer response to new features.
Effective planning: Particularly in a Microservices architecture, the services and the business application might be developed by different teams. Develop the APIs first and then build the business features consuming those APIs. Planning and communication across the Scrum masters is key.

When the team reaches this maturity, it can make a release anytime it wants. Releases can be daily, weekly, or whatever suits the business requirements. However, if a build is ready and left waiting for deployment to production, the essence is lost. Unless there is a compelling reason, production deployment should happen without any delay.
    Continuous Deployment
This is the highest level of DevOps practice. The release process goes one step further and deploys code into production without manual intervention. When a developer checks in code, it passes through quality gatekeepers such as unit tests, integration tests and acceptance tests. A production release happens only when the build successfully passes through these quality gates. This practice needs an extreme level of maturity in the team, and teams following this approach are usually experienced and self-driven.

    Branching strategy
Distributed version control systems give team members flexibility in using version control to share and manage code. Adopt a branching strategy so that you collaborate better and spend less time managing version control and more time developing code.

    Feature branches
The philosophy behind the feature branch workflow is that all feature development should take place in a dedicated branch instead of the master branch. It helps developers keep the changes related to a feature in a branch until the feature is complete, and it keeps the master branch clean and ready for other releases. If more than one engineer works on a feature, all of them should work on the same feature branch; the other option is for every developer to branch out of the feature branch and check in to their individual branch. Encapsulate the feature behind a feature flag.
Feature flags add technical debt over a longer period, so the team should clean up a feature flag once the feature is stable in production. Follow proper naming when creating feature branches, and clean up feature branches when they are done.
It is suggested to create the feature branch from the development branch, not from master.
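A minimal sketch of this workflow (branch and remote names are illustrative):

# Branch off the development branch, not master
git checkout develop
git pull origin develop
git checkout -b feature/checkout-redesign

# Commit work and push the branch; open a pull request into develop when complete
git add .
git commit -m "Add redesigned checkout flow behind a feature flag"
git push -u origin feature/checkout-redesign

# After the pull request is merged, clean up the feature branch
git branch -d feature/checkout-redesign
git push origin --delete feature/checkout-redesign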
    Master branch
The master branch usually contains the code that is deployable to production. The master branch is meant to be stable; every push to master is thoroughly tested and passed through the quality gatekeepers. Never check code directly into master. Follow a thorough review of pull requests into master; the review that takes place in a pull request is critical for improving code quality. Only merge branches through pull requests that pass your review process, and avoid merging branches to the master branch without a pull request.
    Release branches
Some teams follow release branches. They give flexibility to stabilize the features and fix bugs while the next set of features is developed in parallel on other branches. Create a release branch from the master branch as you get close to your release or another milestone, such as the end of a sprint, and give the branch a clear name associating it with the release. Create branches to fix bugs from the release branch and merge them back into the release branch via a pull request. Code from the release branch should be merged into master, as well as back-merged into the dev and feature branches.
In my experience, it is better to avoid release branches as much as possible.
    Infrastructure as Code
The idea behind Infrastructure as Code is to automate the IT and release process through code: prevent manual intervention by driving the provisioning, deployment, testing and decommissioning of compute and storage resources through code.
Infrastructure as Code uses a higher-level or descriptive language to code more versatile and adaptive provisioning and deployment processes. For example, the infrastructure-as-code capabilities included with Ansible, an IT management and configuration tool, can install a MySQL server, verify that MySQL is running properly, create a user account and password, set up a new database and remove unneeded databases.
Kubernetes is the most widely used orchestration platform for running and managing containers at scale. With Kubernetes, your infrastructure is written as code; it can be versioned and easily replicated elsewhere, and by versioning the infrastructure we can see how it has changed over time.

    Kubernetes groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.

Cloud native combined with Infrastructure as Code helps dynamically scale the infrastructure based on user traffic. Infrastructure as Code along with self-healing is a game changer in managing production deployments.
    DevOps tutorial—an introduction
    There are many different stages, concepts, and components in DevOps, and this DevOps tutorial is a great way to learn what DevOps is and how it can help improve your software delivery process.
    Understanding DevOps
    To begin this DevOps tutorial, we’ll introduce some basic definitions to help you understand what DevOps is and how it relates to your overall software development process.

    What is DevOps?

    DevOps is a software development practice that promotes collaboration between development and operations, resulting in faster and more reliable software delivery. Commonly referred to as a culture, DevOps connects people, process, and technology to deliver continuous value.
    The software development process can be a highly manual process, resulting in a significant number of code errors. Development and operations teams can often be out of sync, which can slow software delivery and disappoint business stakeholders. DevOps creates efficiency across all tasks involved in the development, deployment, and maintenance of software.
    Connecting development and operations leads to increased visibility, more accurate requirements, improved communication, and faster time to market.
    DevOps bridges the gap between development and operations, creating significant efficiencies across the development and deployment of software. DevOps includes a strong emphasis on automation, helping reduce the overall number of errors.
    The philosophy of DevOps is to take end-to-end responsibility across all aspects of the project. Unlike more traditional methods of developing software, DevOps bridges the gap between development and operations teams—something that is often missing and can heavily impede the process of software delivery.
    Providing a comprehensive framework to develop and release software, DevOps connects development and operations teams—a gap that can create challenges and inefficiencies in software delivery.
Although both DevOps and agile are software development practices, they each have a slightly different focus. DevOps is a culture that focuses on creating efficiency for all stakeholders involved in the development, deployment, and maintenance of software. Agile is a development approach, rooted in lean manufacturing principles, that provides a production framework for software development. Agile is often specific to the development team, whereas the scope of DevOps extends to all stakeholders involved in the production and maintenance of software. DevOps and agile can be used together to create a highly efficient software development environment.
    Deepen your DevOps learning with these tasks
    Each section in this DevOps tutorial includes a few tasks to help you take steps toward building a DevOps practice. Take a moment to answer the following questions:

    What are some of the challenges you’ve faced in the development and deployment of software?
    In what areas would you like to see improvements in efficiency as you develop and deploy software?

    The fundamentals of a DevOps practice
    Next in this DevOps handbook is to gain an understanding of the main concepts used in a DevOps practice. This section will help explain and clarify the main components.

    Agile planning and lean project management

Commonly used in software teams, agile development is a delivery approach that relates to lean manufacturing. The development is completed in short, incremental sprints. Although it is different from DevOps, the two approaches are not mutually exclusive; agile practices and tools can help drive efficiencies within the development team, contributing to the overall DevOps culture.
    With a team working together, version control is a crucial part of accurate, efficient software development. A version control system—such as Git—takes a snapshot of your files, letting you permanently go back to any version at any time. With a version control system, you can be confident you won’t run into conflicts with the changes you’re working on.
    Continuous integration is the process of automating builds and testing that occur as the code is completed and committed to the system. Once the code is committed, it follows an automated process that provides validation—and then commits only tested and validated code into the main source code, which is often referred to as the master branch, main, or trunk. Continuous integration automates this process, which leads to significant efficiencies. Any bugs are identified early on, prior to merging any code with the master branch.
    Continuous delivery is the fundamental practice that occurs within DevOps enabling the delivery of fast, reliable software. While the process is similar to the overarching concept of DevOps, continuous delivery is the framework where every component of code is tested, validated, and committed as they are completed, resulting in the ability to deliver software at any time. Continuous integration is a process that is a component of continuous delivery.
    Whether on premises or in the cloud, the provisioning and configuration of resources is a key part of environment operations. Through process automation and the use of tools that provide a declarative definition of infrastructure—for example, text-based definition files—teams can deploy and configure resources in a reliable, repeatable way. The text-based definition files can be managed as code with version control, allowing for easy rollback, re-creation, and teardown of complex environments. Technologies such as Terraform or Ansible are common solutions for the implementation of infrastructure as code.
    The scope of DevOps goes beyond development, maintaining responsibility for the software through delivery, including software performance. The entire process of DevOps creates a feedback loop, ultimately providing data points that can both help improve a future project and validate the decision to deploy the software. Monitoring and logging are key components that support validated learning, which then supports the overall initiative to consistently strive toward greater efficiency in the software development and delivery process.
    Deepen your DevOps learning with these tasks
    Now that you’ve learned the key concepts in a DevOps practice, take a moment to answer the following questions:

    What parts of your development process are manual and could benefit from automation?
    Are there opportunities to introduce continuous integration within your team’s build process?
    How is your team managing infrastructure today? Is the process repeatable and reliable or could the process be improved using infrastructure as code?
    What telemetry data would help inform your work?
    What other data points would support your validated learning?

    Building your DevOps culture
    The next part of this DevOps tutorial is discussing how to build a DevOps culture. As you prepare to bring DevOps into your business, you’ll likely encounter differences from your current approach to software delivery.
    Building a new culture doesn’t happen overnight and isn’t as simple as buying a new set of tools. To enable your team to learn and practice DevOps, you may need to make changes to your current team structures, workflows, and habits.
    Deepen your DevOps learning with these tasks
    As you consider building a DevOps culture in your business, take a moment to answer the following questions:

    DevOps helps coordinate all stakeholders in the development, delivery, and maintenance of software: In your organization, who will this include? Create a list of all the individuals who will be working together.
    What ways can you increase communication between development and operations? Brainstorm a few ways to improve stakeholder cooperation.

    Additional DevOps tutorials
    If you want to explore DevOps further, take a deeper dive by trying some of these DevOps tutorials.
    Deploying AWS Infrastructure…with Terraform…using Azure DevOps?
    Welcome to my first mash-up of cloud providers and tools post! I guess this can be considered some new age “multi-cloud” stuff, but my first venture into the world of Azure DevOps opened my eyes to how powerful and surprisingly accessible this tool can be. Let me frame that comment with some background.
    More Dev and Less Ops
I cut my teeth in the world of virtualization, storage and converged infrastructure. A while back I started digging into AWS and realized (like many others certainly have by this point) that cloud wasn't just some buzzword that would fade away in a few years. Even if my customers at the time were only talking about cloud, I knew that building up my cloud skills would be valuable. Fast forward to the present, where I'm on a cloud-focused team and customers can't seem to get enough cloud. If it wasn't already evident a few years ago, the Dev half of DevOps is becoming a very important skill set for those of us who built careers on the Ops side of the fence.
A long time ago, I actually got my Bachelor's degree in Computer Science. It feels like a lifetime now, but I once did a lot of C++ object-oriented programming and actually passed classes (even while seemingly getting in my own way) like Operating Systems and Computer Graphics. While picking that side of things up again isn't necessarily quite like riding a bike, I've always known that foundation would help with my immediate goals of flexing the Dev part of my brain and building the skills required by this newer IT landscape.
    Getting started is the hardest part
I listened to a couple of Day Two Cloud podcasts fairly recently, one titled "Making the Transition to Cloud and DevOps" and another titled "Building Your First CI/CD Pipeline." These conversations hit home in that while I understood these concepts at a high level, I really needed to get my hands dirty and build something. One thing that has stuck with me since my old Comp Sci days is that simply finding a way to start is the hardest hill to climb. It's even true today for things like writing a blog post, but once I get started, it is much easier to plow through whatever is in front of me.
    My starting point for using Azure DevOps (ADO) presented itself by way of a customer project. Being forced to learn something that comes with a deadline is sometimes the best way to reach that starting point. This project required an environment to be built within Azure, where the customer already had some infrastructure and started developing new apps. There was also an ask to have a similar environment built in AWS, even though it wasn’t widely used at the moment. Terraform made perfect sense in this case, as we could deliver Infrastructure as Code (IaC) using one platform, rather than using both Azure Resource Manager and Cloud Formation.
    Azure DevOps from the ground up
It was proposed that Azure DevOps would be used to store and deploy the Terraform code for this project. Thankfully, I was working with another team member who had a lot of ADO experience. When it comes to learning new tech, it is much easier to have someone step you through the process rather than starting from scratch. ADO is a very powerful tool that can do many different things, which was fairly intimidating to me. One hour of knowledge transfer with someone stepping through the basics was enough to spark my interest and show me that it was actually within my grasp.
    I took a more passive role for the deployment of a landing zone into Azure, but soaked in as much as I could during that process. That gave me enough confidence to volunteer for the AWS portion of the deliverable. Since we had already set up ADO to deliver an Azure environment, all I had to do was “AWS-itize” the Terraform code, right? Kinda, sorta…
    After translating the basic Azure infrastructure design over to AWS, I also translated the Terraform code. That was the easy part of the process. Once that was complete, I began to dig into the ADO config and noticed that there was wiggle room to build upon it and learn something new.
    Repositories, Pipelines and Workspaces, oh my!
    For those as unfamiliar with ADO as I was, I want to highlight some of the basics. ADO is Microsoft’s end-to-end DevOps tool chain as a service. Yes it has Azure in the name, but it is a SaaS platform that is separate from Azure Public Cloud. ADO is cloud and platform agnostic, and can integrate or connect to just about anything under the sun. ADO’s major components are:

    Azure Pipelines – platform agnostic CI/CD pipeline
    Azure Boards – work tracking, reporting and visualization
    Azure Artifacts – package management for Maven, npm and NuGet package feeds
    Azure Repos – ADO private hosted git repositories
    Azure Test Plans – integrated planned and exploratory testing solution

I focused solely on Azure Repos and Azure Pipelines for this project. Azure Repos is fairly simple, as it is just ADO's version of a Git repo. It is functionally the same as anything else that uses Git, so if you are familiar with GitHub, you know how to use it.

    It is nice to be able to store code in the same platform that runs your CI/CD pipeline, but a pipeline can still connect to GitHub, among others if you so desire. I also like the ease of cloning directly into VS Code from Azure Repos. It makes pushing changes up to Azure Repos a breeze.

    The biggest learning curve for me was developing the Azure Pipeline. Pipelines in ADO use a YAML file to generally define the tasks a pipeline will perform. YAML is pretty common in IaC and the world of cloud DevOps, so the biggest hurdle is really understanding the ADO pipeline components. Thankfully I had a working pipeline to start with, and plenty of Microsoft documentation to help fill the gaps.

    One of the really cool things about Azure Pipelines is the ability to use a Microsoft hosted agent to perform tasks for you. By defining an agent pool (Linux, Windows or Mac), your pipeline can spin up a VM on the fly for you to do the actual work.
    The meat of a pipeline (at least the pipeline I created) is defining jobs for the pipeline to run. In this case I only had one job, which was to create AWS infrastructure using Terraform. That job consisted of a number of tasks required to complete that job. Thinking of it like a workflow made it easier to understand how to best configure the tasks. Pipeline tasks can be chosen from a wide variety of options that are pre-defined within ADO or custom built programmatically.
The initial Azure version of this pipeline used a Linux agent VM to install Terraform and then the Azure CLI. We used Azure as a backend for the Terraform state, so the next tasks were simply bash scripts that used the Azure CLI to log in to the proper Azure environment, create a resource group, storage account and container, and configure the Terraform backend.
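A rough sketch of what those script steps amount to (the resource names, SKU and service principal variables are hypothetical, not the exact code used in the project):

# Log in with a service principal whose credentials are held as pipeline variables
az login --service-principal -u "$ARM_CLIENT_ID" -p "$ARM_CLIENT_SECRET" --tenant "$ARM_TENANT_ID"

# Create the resource group, storage account and blob container that hold the Terraform state
az group create --name tfstate-rg --location eastus
az storage account create --name tfstatedemo123 --resource-group tfstate-rg --sku Standard_LRS
az storage container create --name tfstate --account-name tfstatedemo123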
While I had done a bit of Terraform on my own, it was never for anything shared by an organization. The backend config was new to me, but it makes a ton of sense when collaboration is taken into account. We also utilized a Terraform workspace so that this code could be reused for multiple hubs/spokes across different regions. Diving into the collaboration side of Terraform helped tie some of the IaC CI/CD concepts together for me. Once the backend was configured, all that was left were quick Terraform init, validate, plan and apply tasks to get the ball rolling.
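Those last tasks boil down to roughly the following (backend values and the workspace name are illustrative):

terraform init \
  -backend-config="resource_group_name=tfstate-rg" \
  -backend-config="storage_account_name=tfstatedemo123" \
  -backend-config="container_name=tfstate" \
  -backend-config="key=aws-hub.terraform.tfstate"

# Reuse the same code for multiple hubs/spokes via workspaces
terraform workspace select us-east-1-hub || terraform workspace new us-east-1-hub

terraform validate
terraform plan -out=tfplan
terraform apply tfplan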
    From a Terraform perspective, the switchover to deploying AWS stuff was super simple. Even though I was using an Azure tool, all it took was a change of the Terraform provider and some access keys to create infrastructure directly into AWS!
    Learning some new tricks
I noticed that the task to install the Azure CLI on the agent VM took 45-60 seconds to complete, so I dug into the built-in ADO tasks and saw that I could simply use the AzureCLI@2 task within the pipeline and not need to wait a whole minute(!) for the CLI install to take place…thanks, MS documentation!
Another cool part of ADO is the ability to store environment variables within the pipeline itself. This is an easy way to keep things like secrets and access keys both safe and readily available to use within the pipeline. One interesting rabbit hole I went down is that I wanted to add conditionals to the scripts that created the Terraform backend: check if the Azure storage account, etc. exist before actually creating them. I'm not very familiar with bash scripting and for the life of me could not figure out how to get these conditionals to work. I pivoted to using PowerShell instead of bash, which is supported by the AzureCLI task and was more familiar to me. It also resulted in learning that environment variables have different syntax across different systems!

    Once I got the PowerShell working, I also used the above link to figure out how to pass a variable between tasks. The script to create the backend had to grab a secret key from the Azure Storage Account, and that variable was required in a later task for the Terraform init -backend-config command. Since I hadn’t installed the Azure CLI and wasn’t defining that variable directly within the agent anymore, I had to first define the var as an empty string in the agent VM and use a write-output(“##vso[task.setvariable variable=KEY]$KEY”) command to have the PowerShell script task pass the value it grabbed back out to the pipeline itself. Google to the rescue again!
The last cool thing I figured out was how to use parameters within the pipeline. In this case, once a Terraform apply builds the stuff, what happens next? Well, if you are testing and running it a number of times to validate, you need to throw a Terraform destroy in for good measure. I found out that you can add parameters that essentially give the person running the pipeline the ability to inject parameter values. In this case, a simple boolean check box for running either apply or destroy allowed me to add conditionals that define which Terraform command the pipeline runs at the end.
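One simple way to express that branching, assuming the boolean parameter is surfaced to the script as a variable named RUN_DESTROY (a hypothetical name), is a script step along these lines:

# The exact casing of the boolean value depends on how the parameter is passed to the script
if [ "$RUN_DESTROY" = "True" ]; then
  terraform destroy -auto-approve
else
  terraform plan -out=tfplan
  terraform apply tfplan
fi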

    virtualBonzo’s take
Azure DevOps is an extremely powerful and far-reaching tool. My use case and initial demo with Terraform/AWS only begin to scratch the surface of what is possible with ADO. Regardless, it was one little victory for me. There are also many other tools and possibly more effective ways to do what I did in this case. I was very lucky (and thankful) to be given something to work with, and that kick in the pants is exactly what I needed to get my hands dirty and learn something new. I am sure there will be more to come in this space as I continue to explore the world of DevOps. All it takes is hours' worth of effort and a bunch of green check boxes to get excited, only to realize there is still so much more out there to learn…
    Starting Product Backlog Management with Azure DevOps (formerly VSTS)

I have worked with TFS, VSTS and now Azure DevOps for several years, on projects of various delivery approaches, sizes and setups. I often receive questions about how to set up Azure DevOps for a single Scrum Team (at various levels of maturity) or in a multi-team environment.
So here I am with a first post about a simple setup and my recommendations based on my personal experience!
    Introduction
While Microsoft has added a tremendous number of new features to the ALM suite, it is pretty easy to get along with the latest version, named Azure DevOps (formerly Visual Studio Team Services).
It is now composed of a full set of features covering the Application Lifecycle Management of a software product, from Product Backlog Management to Automated Builds, Deployment Pipelines, Automated Releases, Testing and more.
You can find a lot of existing material on the Microsoft website:
Microsoft introduces new features on a 3-week release cycle for the online tool, with lots of good features popping up in beta or release versions!
    Let’s get started!
Assuming that you already have an Azure DevOps account, you can create a new project and fill in the project details; within minutes you have a new project space (or "product space", we should really say, for an Agile delivery).
If you are totally new to Azure DevOps, you can create a new organisation account here.
Sorry for the links, I am not going to repeat what has already been covered…
Now, the interesting part is that the organisation account is often managed by your global IT team, which for compliance and security reasons tends to keep things closed.
My first recommendation here is to ask your internal IT team to create a new project space for you with the details you want, and to add your account to the newly created project as Project Administrator!
This will allow you to configure your project space as you need and also be able to create all your repos, builds and releases in the future… and avoid countless back-and-forth requests with your IT team.
    Project details

    Project name:
    Visibility: Private (default)
    Version control: Git (default)
Process Template: Scrum (default)
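For reference, the same project can also be created from the command line with the Azure DevOps CLI extension (the organisation URL is a placeholder); this is only a sketch, not a required step:

az extension add --name azure-devops
az devops project create \
  --name ADO-Demo \
  --org https://dev.azure.com/your-org \
  --process Scrum \
  --source-control git \
  --visibility private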

Personally, I have used the default settings on all projects I have worked on in the past couple of years, and this suits me well for Scrum Teams. However, I have also helped teams using the Agile process template in a very similar way. I haven't used CMMI for years.
    Warning:
The "project" terminology here is a bit confusing! If you are used to working in an Agile environment, we tend to talk about a Product. Here the project is a workspace to manage your Product, with one Product Backlog that can be shared across multiple teams and with potentially several project initiatives in the lifetime of your Product!
Through this blog series, my aim is to give a better view of how to set this up…
    Boards configuration
    So now, you have a brand new Azure DevOps project ready to be setup. For the purpose of the blog series I have created the ADO-Demo project as below:

    Project configuration
As the first step, we are going to look at the Boards configuration, so click on Project settings in the bottom-left panel and navigate to Project configuration in the Boards section.
The Project configuration allows you to set the Iterations and Areas for your Product Backlog. You will use both fields with your Product Backlog Items (PBIs); they are directly used by the various pages to filter your backlog.
    Iterations
    The Iteration field defines your “delivery cycle”. By default you will get something close to this:

This is a good start, but you can also create sub-levels if you want to have releases, versions or even projects (for a more hybrid-agile environment) as below:
    Note that the display order is alphabetical until you set the dates of your iterations.
    This is the main difference between Project and Product! You can easily manage the development projects of your Product within the same Product Backlog and Azure DevOps project space. Again, think about it as your Product timeline.
    Areas
The Area field is another way to categorise your Product Backlog. Each PBI has an area path that you can set. It is commonly used for features or technology layers in a single-team environment. For a multi-team environment, it allows you to split the backlog for each team, but I will write another blog post on this soon…

From the Project configuration again, you can set your areas. My personal preference is to keep things simple and start with domain features, like in the example below:

Note that the area path is a hierarchical structure, so it can be used for more complex setups if you wish.
A great alternative to areas is the use of Tags! Tags can easily complement areas as they are more flexible to set and can still be used for filtering and queries. However, they cannot be used to define your team setup… I will talk more about Tags when looking at the backlog!
    Team configuration
    Alright, now that we have set the areas and iterations for the project (or product!), let’s have a look at the Team configuration.

    Note that when you create a new Azure DevOps project space, a default team is automatically created with the name of the project (ADO-Demo Team in my example).
A lot is already available via the links I provided, so the important parts are:

How do you want to organise your backlog? PBIs, Features + PBIs, or Epics + Features + PBIs?
How do you want to work with bugs? Bugs like PBIs or like Tasks?

There is nothing black and white here; it's a lot about your context, the level of maturity of your team with Scrum and, obviously, your personal preferences.
    Warning:
Another common confusion (or topic of lengthy discussion) is the terminology used for Epics, Features and PBIs (aka backlog items). Let's say another product (JIRA) has only two: Epics and Stories, and it is sometimes difficult for people to embrace change. It is only a way to organise your backlog. Microsoft has followed the SAFe approach here! (No comment!)
Based on my personal experience, starting with Features + PBIs, and Bugs at the same level as the PBIs (the default setup), works well for Scrum Teams. This allows you to decompose user scenarios (Features) into smaller stories (PBIs) and deal with in-sprint and out-of-sprint defects in your backlog.
I have also used Epics for larger cases where we wanted to manage Product initiatives or Portfolio Management.
    Iterations
Now let's look at how to define the iterations for the team!
Since the Azure DevOps backlog and board views are filtered based on the iteration and area fields, it is important to set them properly or you won't see your backlog items.

With the default configuration, all iterations are assigned to the default team. If you have changed the iterations in the Project configuration, it's better to review the assignment in the Team configuration.
Let's say we work on a 3-Sprint initiative: I can remove Sprints 4 to 6 and hide them from the Team views, without physically deleting them from the Project configuration. Still following?

    You can add (or re-add) additional Sprints (iterations) by clicking on Select iteration(s) as below:

Note the new addition where you can add several iterations in one go with the +Iteration button. Still a bit cumbersome, but at least we don't do that too often.
    Areas
Same story with the areas: you need to assign them to your team. So, if you have added sub-areas like I did in the Project configuration, by default they are not assigned to the team, which means backlog items set with the "New customer onboarding" area won't get displayed.

For a single-team environment, the easiest option is to include sub-areas from the context menu by clicking on the "…".

All backlog items from all areas will get displayed going forward, hence the confirmation message below:

    You can exclude the sub area and manually assign the areas later if you wish.
    Boards
Phew! The Project settings are done, and we are ready to look at our Product Backlog and Kanban Board.
    Workflow
Just before jumping into creating all your PBIs, have a think about your story workflow. Azure DevOps backlogs and boards use the backlog item State and Board Column fields.

    By default, both State and Board Column values are the same. The terminology of the Committed state hasn’t been changed, so read it as Forecasted!
    It is important to understand those few states as, again, they drive the story workflow and boards:

    New: added to the backlog, we don’t know much about it yet
Approved: deemed valuable by the Product Owner (the Product Owner did not say "No!") and will likely be done at some point
    Committed: forecasted by the Development Team in the current Sprint

    Kanban Board
In the old TFS days, we used to fully customise the workflow states; today we tend to keep them as they are and play with the Board Column instead. It is way simpler and allows you to set a story flow that fits the team on the Kanban Board.

    As mentioned earlier, initially the Board Column values reflect the State values. Here is your opportunity to change the flow for the team.
The first thing I do is rename the columns, prefixing them with an order number; this comes in handy for graphs and charts as they are ordered alphabetically.
    For this, click on the “gear” on the top-right corner and navigate to Board > Columns settings:

    From here, rename the columns and save the changes…
Note that you can also set your WIP limit, split a column into "doing" and "done", add new columns, associate a State with a Board Column, and describe the "definition of done" of each column from the settings page.

    In this example, I have renamed the columns as:

    0-Backlog: new story in the backlog, we don’t know enough yet
    1-Approved: approved by the Product Owner
    2-In Progress: forecasted by the team in the current Sprint
    3-Done: and done!

The 2-In Progress column can be decomposed a bit, as everything will be "in progress" once the team has forecasted its Sprint Backlog! So let's add a new column for the same workflow state, Committed:

    0-Backlog
    1-Approved
    2-Forecasted: forecasted by the team in the current Sprint
    3-In Progress: the team currently working on it!
    4-Done

    Ok, that’s better!
    What if we want a ready state? You can add a “ready” column once the item has been approved (with Approved state), such as:

    0-Backlog
    1-Approved
    2-Ready
    3-Forecasted: forecasted by the team in the current Sprint
    4-In Progress: the team currently working on it!
    5-Done

Note that you should still be open to adding a PBI to your Sprint even if it is not "ready", as long as the team feels confident enough to forecast it for the Sprint!

Now, we are ready to work!
    Product Backlog
After adding a few Features and PBIs, we are in a position to look at our Product Backlog.
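If you prefer the command line, backlog items can also be added with the Azure DevOps CLI (the titles below are purely illustrative):

az boards work-item create --project ADO-Demo --type "Feature" \
  --title "Customer order tracking"
az boards work-item create --project ADO-Demo --type "Product Backlog Item" \
  --title "As a customer, I can see the status of my order"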

The latest UI looks "disturbingly" similar to JIRA with the Planning panel on the right! But once we get used to it, the new look and feel makes it pretty easy to forecast the Sprint and move items around.
There are still a few things we can change to make it easier to work with.
    First, let’s review the Column options by clicking on the “…“.

My recommendation is to swap the State column for the Board Column, which will provide a better view of our new flow set on the Kanban Board. If you have added sub-areas, it is also useful to add the Area Path column. Finally, I tend to keep the Value Area ("Business" and "Architectural") to highlight architecture, infrastructure and NFR-type PBIs that could end up in the backlog.

The same idea applies to the Kanban Board: you can add fields to the PBI cards in the Settings.

Another super handy feature on the Backlog is to turn on Forecasting in the View options in the top-right corner, which allows you to forecast your future sprints (and release plan) without having to assign the PBIs to a Sprint in advance!

    It gives a great view to the Product Owner and everybody else about the state and progress of the Product delivery at any time.
New Relic: Revisiting DORA's 2019 State of DevOps with Dr. Nicole Forsgren

Did you know the number of elite DevOps performers has almost tripled year over year? It's becoming more and more clear that DevOps is a key pillar of any successful modernization strategy in the enterprise. DevOps Research and Assessment (DORA) recently released the 2019 Accelerate: State of DevOps Report, and we had the chance to talk to Dr. Nicole Forsgren, DORA co-founder and Google's lead researcher for the report, about this year's findings. The questions we received during the live chat were so rich that we wanted to share them with you, both those answered live and the ones we didn't have time to cover during the session.
    Don’t miss New Relic’s take on this year’s findings in the 2019 Accelerate: State of DevOps Report.
Here are the questions from the live chat, with fresh answers from Dr. Nicole Forsgren, plus a few tips from our own point of view as well:
    Q: If the elite group of DevOps performers gets too big, will you have to redefine what ‘elite’ means?
    Dr. Nicole Forsgren
Nicole Forsgren: This is a great question. Right now, we collect questions on a log scale: that is, people are great at knowing what is happening, but in general terms, not in precise terms. People can tell you if their teams are pushing code daily versus monthly (and they won't make a mistake about that, while mistakes in code could 'roll up' to errors there). But we can't ask people for differences of 10 seconds versus 20 seconds. So what we can do is talk about trends in the industry: we can absolutely talk about how much of the industry is able to quickly develop and deliver stable code on demand. And it's interesting… five years ago, I was getting phone calls from highly regulated companies insisting that my research was missing them, that I needed to do a State of DevOps Report for Finance, Telecom, and Healthcare. I don't get those calls anymore. We can see that excellence is possible for everyone. That is exciting, and this level of granularity is fine for that.
    Our research continues to evolve and ask about additional practices that drive excellence and are critical to our infrastructure as well. For example, we found that only 40% of organizations are doing disaster recovery testing annually. Only 40%! So we will continue to monitor trends to see how they evolve into more advanced practices (like chaos testing) and how that impacts availability and reliability.
    Q: What’s the best way to be a DevOps advocate in an organization where DevOps is only thought of as build and deployment automation? Push for adoption of additional practices and habits one at a time?
    NF: Exactly. Start where you are and make incremental improvements. Like the old adage says, ‘Don’t let perfect be the enemy of good.’ Start by identifying your slowest service and take incremental steps to make it better. Prioritize the most critical issues and work your way down the list. And, most important, measure performance baselines both before and after you implement change so that you can demonstrate the impact of your efforts to the business. By sharing your successes you’ll be able to build internal support for your DevOps effort.
    Tori Wieldt: Agreed. And we love this tweet from Mark Imbriaco of Epic Games which sums it up perfectly:
    Tweet from Mark Imbriaco of Epic Games
    Q: How do DevOps and ITIL relate?
NF: ITIL (IT Infrastructure Library) is a practice that we've seen heavily adopted for a number of years, particularly within verticals that are highly regulated. ITIL was born of the best of intentions, with the notion that having stronger processes in place would lead to better stability. But what DORA research found as early as 2014, and has reconfirmed this year, is that heavyweight approval mechanisms actually lead to more delays and instability. While it came from good intentions (one more set of eyes to make sure the changes are right!), the resulting delays lead to batching of work, an increased blast radius once those changes hit production, a higher likelihood that those changes will then result in errors, and then greater difficulty in identifying and debugging the errors that were introduced (time to restore service). That is, we've seen that ultimately it works much better in theory than in practice.
    We’re also seeing that some companies are adopting SAFe (Scaled Agile Framework), which is a great first step. But it’s important that organizations not just ‘land’ and stay there, but continue to evolve and improve. You cannot expect to set frameworks and guidelines that will be around for another 5-10+ years. Successful DevOps requires continual improvement. We’ll also point out that ITIL is much more than change approvals. Be sure to check out this year’s report for how change advisory boards can shift into more strategic roles (page 51).
    Q: What constitutes a failure? A bug? An outage?
    NF: A failure is defined in the report (page 18) as a change to production or release to users that results in degraded service (e.g., a service impairment or service outage) and subsequently requires remediation (e.g., a hotfix, rollback, fix forward, or patch). We generally accept the definition that a bug is an error, fault, or flaw in any program or system that causes it to behave incorrectly; and we define an outage as unplanned system downtime.
    Q: Great tools are only great if you get great adoption. What tactics do you recommend to increase adoption?
NF: Achieving great adoption ultimately ties back to several factors: tools need to be useful, easy to use, and provide users with the right information. (Fun fact: a lot of this ties back to pieces of my dissertation and post-doc!) Employing tools that check these boxes is important, as it's highly correlated with the quality of continuous delivery and predictive of productivity.
    TW: Establishing Communities of Practice (COPs) within an organization can also be a great way to encourage and influence internal adoption. These could be broadly interest-based, so that participants can pick and choose which topics are relevant and useful to their job function. Then, within the group, members regularly hold trainings or ‘lunch and learns’ to share demos or walkthroughs of the tools that they are using. We have strong COPs at New Relic, and we like to think of it as ‘crowdsourcing’ tech information.
    Q: How often do you find a strong (active) programming requirement a condition of the DevOps role?
    NF: It’s certainly important. But that also depends on how you define a ‘DevOps role.’ In any case, by its nature, the approach of DevOps is intended to apply an ‘engineering lens’ to traditional operations, allowing ops work to be more repeatable and more performant, or to apply an appreciation for operations concerns like reliability and scalability to development work. Both of those require some programming skills. The importance is in the overlap and extensibility of the work.
TW: We agree that a solid technical background is a requirement for good DevOps work. You'll need to do some level of programming for automation. We recently compiled key questions to prep for in a DevOps interview, and many of them focus on highlighting examples of technical and collaborative projects the candidate has worked on, as well as any skills or certifications the candidate has earned. Our blog on the typical Day in the Life of an SRE, penned by our own Yonatan Schultz, also demonstrates why a broad technical background is so important to success in a DevOps role.
Q: Can you talk more about the 'pre-mortem'? How is it different from a CAB?
    TW: CAB stands for ‘Change Advisory Board,’ which in some organizations is a group of two to three senior engineers who are tasked with the job of approving code for production, with the goal of minimizing risk. Here at New Relic, we previously employed a CAB but eventually found it to be too much of a bottleneck and a hurdle to agility. Instead, we’ve now implemented an automated ‘pre-flight check’ for all code changes, which allows us to move much faster.
    A pre-mortem, on the other hand, is an exercise in which your stakeholder group holds a brainstorming session about a particular project (or infrastructure or services) before it deploys and envisions all the things that could possibly go wrong with it. This is a fantastic opportunity to lay out a solid game plan and minimize the things that could go wrong well before the adrenaline kicks in on launch day.
    Check out the report and webinar now
Thanks to all our participants for such great questions! Watch the webinar on-demand and download this year's State of DevOps report in full. For more great DevOps content, be sure to visit our resources page or check out our upcoming webinars.
    8 Best Practices for Successful Implementation of DevOps in Your Enterprise
DevOps. Have you come across this term before?
These days, it has become a buzzword among organizations aspiring to partner with reliable IT outsourcing service providers for software development. DevOps has gained tremendous popularity in the last five years by enabling companies to achieve scale.
Previously, agile practices were the go-to approach for successful project execution and delivery. DevOps does not replace Agile practices; rather, it extends them, helping teams complete projects effectively by establishing the next level of collaboration and communication between businesses and software service providers.
    Let’s start with what DevOps is all about.
    What is DevOps?
The name itself suggests that it is something related to development and operations. DevOps provides a set of processes and philosophies that drive a cultural shift in an organization by promoting collaboration between the development and operations teams. It contains four key components: collaboration, culture, practices, and tools.
DevOps as a Service, a relatively new discipline, enables organizations to deliver high-quality software, improve time to market, boost productivity, and reduce operational cost in order to serve their customers efficiently and stay competitive in the market. Moreover, it helps teams release software faster, manage unplanned work, solve critical issues quickly, and gain trust and confidence.
For effective collaboration between the development and operations teams, DevOps provides a variety of principles and practices. In this post, we are going to cover 8 of the best DevOps practices that one should not ignore.
    8 Best Practices That Every Enterprise Should Know Before Adopting DevOps as a Service
    #1. Test Automation
    In order to compose quality code, developers need to test the software regularly. DevOps allows for early testing that gives developers an opportunity to identify and resolve the issue during software development rather than later in the process.
Automated testing allows for quicker execution of the SDLC (Software Development Life Cycle) in comparison to manual testing. Test automation can be applied to database and networking changes, middleware configurations, and code development using regression testing and load testing.
    Test automation can be accomplished by performing varied activities such as identifying test scenarios and cases, choosing the right set of tools for automation, setting up an appropriate test environment, running test cases and analyzing results.
    #2. Integrated Configuration Management
Configuration and change management are integral parts of operations. Configuration management is about the automation, monitoring, management, and maintenance of system-wide configurations that take place across networks, servers, applications, storage, and other managed services.
Integrated configuration management gives development teams the bigger picture. It allows them to utilize existing services during software development rather than investing time and effort in reinventing new services from scratch.
    #3. Integrated Change Management
    Change management is a process in which configurations are changed and redefined to meet the conditions of dynamic circumstances and new requirements. During the configuration management, if any changes are required then change management comes into the picture. The operations teams provide their inputs on what opportunities and consequences the change might expose at the wider level and what other systems could be impacted.
    #4. Continuous Integration
It is a DevOps software development practice where the development team regularly merges code changes into the repository, after which automated builds and tests run. Continuous Integration practices allow developers to carry out integrations sooner and more frequently, while CI tools help them detect integration problems in new and existing code and solve them early. Thus, CI improves collaboration among teams and ultimately helps build a high-quality software product.
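As a rough illustration of what such an automated integration build can look like in practice, here is a minimal sketch using Azure Pipelines YAML (the branch name, Node.js version, and npm scripts are assumptions for the example, not part of the original article):

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: NodeTool@0            # install a specific Node.js version on the hosted agent
  inputs:
    versionSpec: '14.x'
- script: npm ci              # restore dependencies from the lockfile
  displayName: 'Install dependencies'
- script: npm test            # run the automated test suite on every push
  displayName: 'Run tests'

Every push to the repository triggers this definition, so integration problems surface within minutes of a commit rather than at the end of a sprint.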
    #5. Continuous Delivery
Continuous Delivery is a DevOps practice where newly developed code changes are tested by the QA team at different stages, using both automated and manual testing, and once a change passes all the tests it moves into production. It allows the development team to build, test, and release the application faster and more frequently, in short cycles.
It helps organizations increase the number of deliveries, reduce manual work, minimize the risk of failure during production, and more.
    #6. Continuous Deployment
The deployment process contains various sub-processes such as code creation, testing, versioning, deployment, post-deployment, and so on. In the continuous deployment process, the code is automatically deployed to the production environment once it successfully passes all the test cases in QA, UAT, and other environments. Many tools are available that perform continuous deployment from staging to production with minimal human intervention.
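To make the difference from continuous delivery concrete, here is a minimal sketch of a gated deployment stage, again in Azure Pipelines YAML (the stage names, environment name, and deployment step are assumptions; approvals attached to the environment decide whether the deployment is fully automatic or requires a sign-off):

stages:
- stage: Build
  jobs:
  - job: BuildAndTest
    steps:
    - script: npm ci && npm test          # build and verify the change

- stage: DeployProduction
  dependsOn: Build
  jobs:
  - deployment: DeployWeb
    environment: production               # checks and approvals configured on the environment gate this stage
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "Deploying build $(Build.BuildId) to production"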
    #7. Application Monitoring
Application infrastructure monitoring is crucial to optimizing application performance, whether the app is deployed in the cloud or in a local data center. If a bug slips into the application during the release process, it can turn into a failure, so it is very important for the development and operations teams to monitor proactively and check the performance of the application. Various application monitoring tools are available that offer many metrics related to applications, infrastructure, sales, graphs, analytics, and more.
    #8. Automated Dashboards
Leverage DevOps intelligence with automated dashboards. A dashboard provides data along with detailed insights and reports on every operation, such as the number of tests run, test durations, and the number of failures and successes in testing. It also allows teams to review configuration changes made to the database and servers, and deployments that have taken place across the system.
The dashboard acts as a centralized hub that gives the operations team real-time data insights, which help them select the right set of automation and testing tools. Moreover, the various logs, graphs, and metrics give operations teams a holistic view of changes happening in the system.
    Summing Up
DevOps as a Service is a culture that most software development companies aspire to in order to deliver high-quality applications by establishing transparency and open collaboration between development and operations teams. DevOps practices allow for incremental implementation, so organizations do not need to make all the required changes and updates from the beginning. By following the above-mentioned DevOps practices, IT service providers can develop and deliver robust software solutions that help companies achieve their business objectives effectively.
    Author Bio:
    Sandeep has more than 2 decades of experience in creating world class teams and driving innovation through cutting edge products and mobile app development solutions. He is passionate about creating new technology solutions and delivering “Great Customer Experience” by evangelizing “Ubiquitous Mobility”.
Azure Pipelines
    GitHub hosts over 100 million repositories containing applications of all shapes and sizes. But GitHub is just a start—those applications still need to get built, released, and managed to reach their full potential.
Azure Pipelines is a service that enables you to continuously build, test, and deploy to any platform or cloud. It has cloud-hosted agents for Linux, macOS, and Windows; powerful workflows with native container support; and flexible deployments to Kubernetes, VMs, and serverless environments.

    Azure Pipelines provides unlimited CI/CD minutes and 10 parallel jobs to every GitHub open source project for free. All open source projects run on the same infrastructure that our paying customers use. That means you’ll have the same fast performance and high quality of service. Many of the top open source projects are already using Azure Pipelines for CI/CD, such as Atom, CPython, Pipenv, Tox, Visual Studio Code, and TypeScript—and the list is growing every day.
In this lab, you'll see how easy it is to set up Azure Pipelines with your GitHub projects and how you can achieve end-to-end traceability from work items to code changes, commits, builds, and releases.
    Prerequisites
    These items are required for this lab.
    An Azure DevOps account from https://dev.azure.com
    Visual Studio Code installed from https://code.visualstudio.com.
    Setting up the environment

    On GitHub, navigate to the Microsoft/ContosoAir repository.
    If you’re not already signed in to GitHub, sign in now.

    In the top-right corner of the page, click Fork to fork the repository to your own account.

    Once the repo is forked, you need to clone the GitHub repo locally and open it in Visual Studio Code.
    Copy the clone URL of your forked repository.

Start Visual Studio Code. Press Ctrl+Shift+P to bring up the Command Palette and enter Git: Clone to clone the Git repository. You will be asked for the URL of the remote repository; paste the URL you copied earlier. Choose the directory under which to put the local repository, then choose Open Repository when prompted.

    Lab Scenario:
    In this lab, we’ll be illustrating the integration and automation benefits of Azure DevOps. We will take on the role of helping a fictitious airline—Contoso Air—that has developed their flagship web site using Node.js. To improve their operations, they want to implement pipelines for continuous integration and continuous delivery so that they can quickly update their public services and take advantage of the full benefits of DevOps and the cloud.
    The site will be hosted in Azure, and they want to automate the entire process so that they can spin up all the infrastructure needed to deploy and host the application without any manual intervention. Once this process is in place, it will free up their technology teams to focus more on generating business value.
    Exercise 1: Setting up automated CI/CD pipelines with Azure Pipelines
    In this exercise, we will help Contoso Air revamp a critical component of their DevOps scenario. Like all airlines, they rely on their web site to generate and manage business opportunities. However, the current processes they have in place to move a change from their source code to their production systems is time-consuming and open to human error. They use GitHub to manage their source code and want to host their production site on Azure, so it will be our job to automate everything in the middle.
    This will involve setting up a pipeline so that commits to the GitHub repo invoke a continuous integration build in Azure DevOps. Once that build is complete, it will invoke a continuous delivery deployment to push the bits out to Azure, creating the required resources, if necessary. The first thing we need to do is to connect GitHub with Azure DevOps, which we can do via the Azure Pipelines extension in the GitHub Marketplace.
    Task 1: Installing Azure Pipelines from GitHub Marketplace
Azure Pipelines is available in the GitHub Marketplace, which makes it even easier for teams to configure a CI/CD pipeline for any application using your preferred language and framework as part of your GitHub workflow, in just a few simple steps.
    Switch to the browser tab open to the root of your GitHub fork.
    Navigate to the GitHub Marketplace.

    Search for “pipelines” and click Azure Pipelines.

    Scroll to the bottom and click Install it for free. If you previously installed Azure Pipelines, select Configure access instead to skip steps 6-8.

    If you have multiple GitHub accounts, select the one you forked the project to from the Switch billing account dropdown.

    Click Complete order and begin installation.

    Select the repositories you want to include (or All repositories) and click Install.

    Task 2: Configuring a Continuous Integration Pipeline
Now that Azure Pipelines has been installed and configured, we can start building the pipelines, but first we need to select a project where the pipeline will be saved. You may select an existing Azure DevOps project or create a new one to hold and run the pipelines we need for continuous integration and continuous delivery. The first thing we'll do is create a CI pipeline.
Select the organization and Azure DevOps project that you want to use. If you do not have one, you can create one for free.

    Select the forked repo.

Every build pipeline is simply a set of tasks. Whether it's copying files, compiling source, or publishing artifacts, the existing library of tasks covers the vast majority of scenarios, and you can even create your own if you have specialized needs not already covered. We're going to use YAML, a markup syntax that lends itself well to describing the build pipeline. Note that a Node.js pipeline is suggested as a starting point based on an analysis of our source project; we'll replace its contents with the final YAML required for our project.
Select Node.js as the recommended template if prompted.

    Replace the default template with the YAML below.
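The YAML listing from the original lab is not reproduced in this copy of the article. As a placeholder only, a minimal Node.js build definition along the lines described later in the lab might look like this (the template folder, Node.js version, and artifact name are assumptions):

trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: CopyFiles@2                      # copy ARM deployment templates for the release pipeline
  inputs:
    sourceFolder: 'deployment'
    contents: '*.json'
    targetFolder: '$(Build.ArtifactStagingDirectory)/Templates'
- task: NodeTool@0                       # install Node.js on the hosted agent
  inputs:
    versionSpec: '10.x'
- script: npm install                    # build the project with npm
  displayName: 'npm install'
- task: ArchiveFiles@2                   # archive the built solution
  inputs:
    rootFolderOrFile: '$(System.DefaultWorkingDirectory)'
    archiveFile: '$(Build.ArtifactStagingDirectory)/$(Build.BuildId).zip'
- task: PublishBuildArtifacts@1          # publish everything as the 'drop' artifact
  inputs:
    pathToPublish: '$(Build.ArtifactStagingDirectory)'
    artifactName: 'drop'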
    Click Save and run.

    Confirm the Save and run to commit the YAML definition directly to the master branch of the repo.

    Follow the build through to completion.

    Task 3: Adding a build status badge
    An important sign for a quality project is its build status badge. When someone finds a project that has a badge indicating that the project is currently in a successful build state, it’s a sign that the project is maintained effectively.
    Click the build pipeline to navigate to its overview page.

    From the ellipses (…) dropdown, select Status badge.

    The Status badge UI provides a quick and easy way to integrate the build status wherever you want. Often, you’ll want to use the provided URLs in your own dashboards, or you can use the Markdown snippet to add the status badge to locations such as Wiki pages. Click the Copy to clipboard button for Sample Markdown.

    Return to Visual Studio Code and open the README.md file.
    Paste in the clipboard contents at the beginning of the file. Press Ctrl+S to save the file.

    From the Source Control tab, enter a commit message like Added build status badge and press Ctrl+Enter to commit. Confirm if prompted.

In Git, changes need to be staged before they are included in a commit. If you are prompted to choose whether you want VS Code to automatically stage all changes and commit them directly, choose Always.

If you receive an error prompting you to configure user.name and user.email in Git, open a command prompt and enter the following commands to set your user name and email address:
git config --global user.name "Your Name"
git config --global user.email "Your Email Address"
    Press the Synchronize Changes button at the bottom of the window to push the commit to the server. Confirm if prompted.

    You will need to sign in to GitHub if you have not already signed in

    Go to the readme file on the browser and you will see the status.

    Task 4: Embedding automated tests in the CI pipeline
Now that we have our CI build succeeding, it's time to deploy. But how do we know whether the build is a good candidate for release? Most teams run automated tests, such as unit tests, as part of their CI process to ensure that they are releasing high-quality software. Teams capture key code metrics, such as code coverage and code analysis, as they run the tests, to make sure that code quality does not drop and that technical debt, if not completely eliminated, is kept low.
    We’re going to pull down the azure-pipelines.yml file that we created earlier and add tasks to run some tests and publish the test results.
    Return to Visual Studio Code.
    From the Explorer tab, open azure-pipelines.yml.

    Before we make our change, let’s take a quick look at the build tasks. There are four steps required for the build. First, deployment templates are copied to a target folder for use during the release process. Next, the project is built with NPM. After that, the built solution is archived and finally published for the release pipeline to access. With the Azure Pipelines extension for Visual Studio Code, you get a great YAML editing experience, including support for IntelliSense.
    What we are missing is testing in the pipeline. We already have unit tests for our code. We just have to run them in the pipeline. We will add tasks to run the test and publish the results and code coverage.
Remove all the steps and replace them with the following code. Press Ctrl+S to save the file.
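Again, the exact listing from the lab is not included here; a representative replacement that runs the unit tests and publishes results and coverage could look like the following (the npm script behaviour and result file names are assumptions):

steps:
- task: NodeTool@0
  inputs:
    versionSpec: '10.x'
- script: npm install
  displayName: 'npm install'
- script: npm test                            # assumes the test script writes JUnit XML and Cobertura coverage
  displayName: 'Run unit tests'
- task: PublishTestResults@2                  # makes results visible on the build's Tests tab
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '**/test-results.xml'
- task: PublishCodeCoverageResults@1          # surfaces code coverage alongside the build
  inputs:
    codeCoverageTool: 'Cobertura'
    summaryFileLocation: '**/cobertura-coverage.xml'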
    From the Source Control tab, enter a commit message like Updated build pipeline and press Ctrl+Enter to commit. Confirm if prompted.

    Press the Synchronize Changes button at the bottom of the window to push the commit to the server. Confirm if prompted.

Back in Azure DevOps, navigate to Pipelines > Pipelines. We can see that our build pipeline has kicked off a new build.

    We can follow as it executes the tasks we defined earlier, and even get a real-time view into what’s going on at each step. When the build completes, we can review the logs and any tests that were performed as part of the process. Track the build tasks.

    Follow the build through to completion.

Now that the build has completed, let's check out the Tests tab to view the published test results. We can get quantitative metrics such as total test count, test pass percentage, and failed test cases from the Summary section.

    The Results section lists all tests executed and reported as part of the current build or release. The default view shows only the failed and aborted tests in order to focus on tests that require attention. However, you can choose other outcomes using the filters provided

    Finally, you can use the details pane to view additional information, for the selected test case, that can help to troubleshoot such as the error message, stack trace, attachments, work items, historical trend, and more.
From the results, we can see that all 40 tests have passed, which means we have not broken anything and this build is a good candidate for deployment.
    Task 5: Configuring a CD pipeline with Azure Pipelines
    Now that the build pipeline is complete and all tests have passed, we can turn our attention to creating a release pipeline.
    Like the build templates, there are many packaged options available that cover common deployment scenarios, such as publishing to Azure. But to illustrate how flexible and productive the experience is, we will build this pipeline from an empty template.
    From the left hand menu, under Pipelines click Releases. Click New Pipeline to create a new CD pipeline to deploy the artifacts produced by the build.

    Click Empty job.

    The first item to define in a release pipeline is exactly what will be released and when. In our case, it’s the output generated from the build pipeline. Note that we could also assign a schedule, such as if we wanted to release the latest build every night.
    Select the associated artifact.

    Set Source to the build pipeline created earlier and Default version to Latest. Change the Source alias, if you want, to something like “_ContosoAir-CI” and click Add. Note that this is an identifier (typically a short name) that uniquely identifies an artifact linked to the release pipeline. It cannot contain the characters: \ / : * ? | or double quotes

    As we did with continuous integration starting on a source commit, we also want to have this pipeline automatically start when the build pipeline completes. It’s just as easy.
    Click the Triggers button on the artifact.

    Enable continuous deployment, if it is not already enabled.

    We also have the option of adding quality gates to the release process. For example, we could require that a specific user or group approve a release before it continues, or that they approve it after it’s been deployed. These gates provide notifications to the necessary groups, as well as polling support if you’re automating the gates using something dynamic, such as an Azure function, REST API, work item query, and more. We won’t add any of that here, but we could easily come back and do it later on.

    Click the pre-deployment conditions button.

    Review pre-deployment condition options.

    Select the Variables tab.

    In this pipeline, we’re going to need to specify the same resource group in multiple tasks, so it’s a good practice to use a pipeline variable. We’ll add one here for the new Azure resource group we want to provision our resources to.
    Add a resourcegroup variable that is not currently used by an existing resource group in your Azure account (“contosoair” will be used in this script).

    Also, just like the build pipeline, the release pipeline is really just a set of tasks. There are many out-of-the-box tasks available, and you can build your own if needed. The first task our release requires is to set up the Azure deployment environment if it doesn’t yet exist. After we add the task, I can authorize access to the Azure account I want to deploy to and instruct it to use the variable name we just specified for the resource group name.
    Select the Tasks tab. Click the Add task button.

    Search for “arm” and Add an ARM template deployment task.

    Select the newly created task.

    Then, select the Task version to 2.*.

    Select and authorize an Azure subscription.

    Note: You will need to disable popup-blockers to sign in to Azure for authorization. If the pop-up window hangs, please close and try it again.

Set the Resource group to "$(resourcegroup)" and select a Location.

    Rather than having to manually create the Azure resources required to host the web app, we defined an Azure Resource Manager or ARM template that describes the environment in JSON. This allows the environment definition to be updated and managed like any other source file. These were the files we copied to the Templates folder during the build pipeline. You can also override the template parameters as part of this configuration if required.
    Enter the settings below. You can use the browse navigation to select them from the most recent build output.

    Template: $(System.DefaultWorkingDirectory)/_ContosoAir-CI/drop/Templates/azuredeploy.json Template parameters: $(System.DefaultWorkingDirectory)/_ContosoAir-CI/drop/Templates/azuredeploy.parameters.json
You will also need to set Override template parameters to generate an Azure App Service name that is globally unique, so using your own name is recommended. For example, if your name is John Doe, use something like -p_environment johndoe. This will be used as part of the App Service name in Azure, so please limit it to supported characters.
When this task completes, it will have generated an Azure resource group with the resources required to run our application. However, the ARM template does some processing of the variables to generate names for the resources based on the input variables, and we will want those names in future tasks. While we could potentially hardcode them, that could introduce problems if changes are made in the future, so we'll use the ARM Outputs task to retrieve those values and put them into pipeline variables for us to use. This happens to be a third-party task installed earlier from the Visual Studio Marketplace, which offers this and many other extensions for Azure DevOps from both Microsoft and third parties.
    Click the Add task button.

    Search for “arm” and add ARM outputs task.

    The key variable we care about here is the name of the app service created, which our ARM template has specified as an output. This task will populate it for us to use as the “web” variable in the next task.
    Select the newly created task. Select the same subscription from the previous task and enter the same resource group variable name.

    Finally, we can deploy the app service. We’ll use the same subscription as earlier and specify the web variable as the name of the app service we want to deploy to. By this time in the pipeline, it will have been filled in for us by the ARM Outputs task. Also, note that we have the option to specify a slot to deploy to, but that is not covered in this demo.
    Click the Add task button.

    Search for “app service” and Add an Azure App Service Deploy task.

    Select the newly created task.

    Select the same subscription as earlier.

Enter the App Service name of "$(web)".

    Save the pipeline.

    Select + Release and then select Create a Release

    Select Create to start a new release.
    Navigate to the release summary by clicking on the Release-1 link that appears. Click In progress to follow the release process.

    Note that it will take a few minutes (around 5 at the time of drafting) for the app to finish deploying due to heavy first-time operations.

    Select the App Service Deploy task to view the detailed log. You should find the URL to the published website here. Ctrl+Click the link to open it in a separate tab.

    This will open the web page of the Contoso Air.

    Next: GitHub integration with Azure Boards
    In addition to Azure Pipelines, GitHub users can also benefit from Azure Boards, a set of features that enable you to plan, track, and discuss work across your teams using Kanban boards, backlogs, team dashboards, and custom reporting. You can link GitHub activities from Azure Boards by mentioning them in commits and pull requests, and even automate the state transition of linked work items when pull requests are approved.
    Azure DevOps with Slack Approvals
In this post, Azure DevOps with Slack Approvals, you will learn about the various integration options that bring Slack and Azure DevOps closer together.
Slack is deprecating the Visual Studio Team Services (VSTS) app from its app store. This app was built by the Slack team a few years ago and provided basic integration between VSTS and Slack, allowing users to get notified of events in Azure DevOps such as the creation of pull requests, updates to work items, completed builds, and more.
In addition to receiving notifications on events from Azure DevOps, these new apps support rich features such as:

    Creating work items using a command or messaging action
    Approving / rejecting releases from the Slack channel
    Notifying teammates with @mention support for action
    Unfurling URLs for work items, pull requests and pipelines
    Supporting notifications for new YAML based pipelines

    Azure Pipelines with Slack
If we use Slack, we can use the Azure Pipelines app for Slack to easily monitor events for our pipelines.
We can set up and manage subscriptions for builds, releases, YAML pipelines, pending approvals, and more from the app, and get notifications for these events in our Slack channels.
    This feature is only available on Azure DevOps Services. Typically, new features are introduced in the cloud service first, and then made available on-premises in the next major version or update of Azure DevOps Server.
    Add the Azure Pipelines app to your Slack workspace
    Navigate to Azure Pipelines Slack app to install the Azure Pipelines app to your Slack workspace. Once added, you will see a welcome message from the app as shown below.
    Use the /azpipelines handle to start interacting with the app.
    Once the app has been installed in your Slack workspace, you can connect the app to the pipelines you want to monitor. The app will ask you to authenticate to Azure Pipelines before running any commands.
    To start monitoring all pipelines in a project, use the following slash command inside a channel:
    /azpipelines subscribe [project url]
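For example, with a hypothetical organization and project (the URL below is a placeholder, not a real project):
/azpipelines subscribe https://dev.azure.com/myorg/myproject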
To manage the subscriptions for a channel, use the following command:
/azpipelines subscriptions
This command will list all the current subscriptions for the channel and allow you to add new subscriptions.
Approve deployments from your channel:
    You can approve deployments from within your channel without navigating to the Azure Pipelines portal by subscribing to the Release deployment approval pending notification for classic Releases or the Run stage waiting for approval notification for YAML pipelines. Both of these subscriptions are created by default when you subscribe to the pipeline.
    Whenever the running of a stage is pending for approval, a notification card with options to approve or reject the request is posted in the channel. Approvers can review the details of the request in the notification and take appropriate action. In the following example, the deployment was approved and the approval status is displayed on the card.
    The app supports all the checks and approval scenarios present in Azure Pipelines portal, like single approver, multiple approvers (any one user, any order, in sequence) and teams as approvers. You can approve requests as an individual or on behalf of a team.
    Azure Boards with Slack

    If we use Slack, you can use the Azure Boards app for Slack to create work items and monitor work item activity in your Azure Boards project from your Slack channel.
The Azure Boards app for Slack allows users to set up and manage subscriptions for work item events (created, updated, and so on) and get notifications for these events in their Slack channel.
    Conversations in the Slack channel can be used to create work items. Previews for work item URLs help users to initiate discussions around work.

    To create a work item, we must be a contributor to the Azure Boards project.
    To create subscriptions in a Slack channel for work item events, we must be a member of the Azure Boards Project Administrators group or Team Administrators group.
    To receive notifications, the Third party application access via OAuth setting must be enabled for the organization.
Once the above is done, we will see the app's welcome message in the Slack channel.

    Once the app has been installed in your Slack workspace, connect and authenticate yourself to Azure Boards.
    After signing in, use the following slash command inside a Slack channel to link to the Azure Boards project which you specify with the URL :
    /azboards link [project url]
    Create a work item with a command

    With Azure Boards app you can create work items from your channel. The app supports custom work items as well.

To create a work item, use /azboards create.
You can create work items directly from a command by passing the work item type and title as parameters. Work items will be created only if they have no other mandatory fields to fill in.
/azboards create [work item type] [work item title]
/azboards create 'user story' Push cloud monitoring alerts to mobile devices
We can also create a work item based on a message, as shown below:
    This will be the preview of the work item created from the slack channel.
    Azure Repos with Slack
    If you use Slack, you can use the Azure Repos app for Slack to easily monitor your Azure repositories.
    We can set up and manage subscriptions to receive notifications in your channel whenever code is pushed/checked in and whenever a pull request (PR) is created, updated or a merge is attempted.
    This app supports both Git and Team Foundation Version Control (TFVC) events.

    To create subscriptions in a Slack channel for repository-related events, we must be a member of the Azure Project Administrators group or Team Administrators group.
    To receive notifications, the Third-party application access via OAuth setting must be enabled for the organization.

    Once the app has been installed in your Slack workspace, connect and authenticate yourself to Azure Repos using /azrepos signin command.
To start monitoring all Git repositories in a project, use the following slash command inside a channel:
    /azrepos subscribe [project url]
For Git:
For TFVC:
    The subscribe command gets you started with a default subscription. For Git repositories, the channel is subscribed to the Pull request created event (with target branch = master), and for TFVC repositories, the channel is subscribed to the Code checked in event.
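As an illustration, subscribing a channel to a hypothetical project would look like this (placeholder URL):
/azrepos subscribe https://dev.azure.com/myorg/myproject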
    To view, add and remove subscriptions for a channel, use the subscriptions command:
    Example: Get notifications only when my team is in the reviewer list for a PR
    Example: Tell me when merge attempts fail due to a policy violation
    When a user pastes the URL of a PR, a preview is shown like the one in the following image. This helps to keep PR-related conversations contextual and accurate.
These are the three main Slack apps used with Azure DevOps. With these integrations in place, we can do our day-to-day work from within Slack itself.
    Establish team dashboards: gather and visualize key metrics
    New Relic Insights dashboards enable collaboration and help teams provide improved digital customer experience through better performing applications and websites. Dashboards also increase employee productivity by helping teams align with business goals to better understand how their application’s performance impacts their full business.
    When issues arise, dashboards help teams narrow their search to a manageable number of endpoints and service layers, reducing the time to detection or resolution. Fostering collaboration mitigates the risk of friction, giving each stakeholder data relevant to their role.
    Prerequisites
    This tutorial assumes:

    You have instrumented your applications in New Relic.
    You understand the basics of creating dashboard widgets.
    Optional: you have added custom attributes and events.

    For more information, review New Relic’s objectives and baselines tutorial, which also includes detailed information about SLIs, SLOs, and SLAs.
    Basic process
    In this tutorial, New Relic recommends that you start with team dashboards and then build a business performance dashboard. Team dashboards let you visualize the service level indicators (SLIs) and other key performance indicators (KPIs) for your applications at a glance by providing the status of relevant components in a single view.

    Use team dashboards in your daily standups to guide your work for the day.
    Use business performance dashboards as a single source of truth for broader observation about your business as a whole.

    For more information on why development and operations teams should track services running in production and make them highly visible, see the O’Reilly DevOps Handbook
    1. Make a dashboard of your SLIs
Team dashboards enable collaboration and provide a shared understanding of which areas of your application or organization need attention.

    Group your dashboards by business units or functional areas as appropriate.
    Personalize the information that is relevant to your business units.
    Ask questions such as:
    Application owners: What are the top five error types affecting my application?
    Online store manager: How many people are affected by “Unable to display the shopping cart” errors?
    Executives: What is the revenue at risk when customers check online fares and availability of flights? Which channels are affected?

    After you select the metrics to capture, use the data explorer to create views of your SLIs to add to your dashboard.
    If the data explorer does not provide exactly what you need, create your own NRQL queries.

    For example, for an SLI based on HTTP status codes, use the following NRQL query:
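The query itself is missing from this copy of the tutorial; a typical availability-style SLI on HTTP status codes might be sketched as follows (the event type and attribute name are assumptions and depend on your instrumentation; newer agents report http.statusCode instead of httpResponseCode):

SELECT percentage(count(*), WHERE httpResponseCode < '500') AS 'Availability' FROM Transaction SINCE 1 day AGO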

    In addition to application performance, it’s also important to measure efficiency of your delivery pipeline. Key indicators of your team’s progress toward fully functional DevOps include:

    Deployment frequency: Companies with DevOps cultures deploy code more frequently.
Change lead time: How quickly teams make changes is a great way to measure their agility. High-performing DevOps teams average less than one hour between code commit and deploy, while traditional teams take between one and six months.
    Mean time to recover (MTTR): Every organization has failures. Modern teams recover in minutes, not hours. Having precise measurements of MTTR helps IT managers monitor the people, processes, and technology that enable rapid recovery and head off problems before they result in significant downtime.

    For more information about these metrics, review New Relic’s tutorial about iterating and measuring impact.
    As you gather interesting views for your first dashboard, don’t overthink it. Consider this initial dashboard a discussion starter.
    2. Share the first version of your dashboard
    After creating a basic dashboard that charts some of the key data for your business and your team, share the dashboard with your team and other stakeholders. As you engage others for feedback, you may find metrics are missing. At the same time, do not be afraid to remove a metric that is not actionable or does not make sense.
    A well-formed team dashboard can help facilitate productive daily discussions and effective collaboration across your team. Good discussion questions include:

    Does this dashboard make sense to us?
    Are we measuring the right things?
    What assumptions are we making to capture this data?
    Is what we are measuring actionable? What would we do if we were alerted on this SLI?
    Could someone else understand this dashboard without explanation?
    What would the CTO think if they saw this dashboard?

Also, determine how this team dashboard can be most helpful in your daily workflow. For example, check your dashboards during your daily standup to see if you need to re-prioritize your daily work.
    3. Create the team dashboard
    Now that you have buy-in from the team, build out a full dashboard with the widgets your team has agreed on. At the application level, your goal is to ensure that your dashboard tracks both of these criteria:

What your application's health is; for example, memory usage and transaction counts
To what extent your team is achieving its business goals; for example, the number of new users, user session lengths, percent of users active, etc.

    Insights lets you create many chart types for the most logical data to track. Recommendation: At a minimum, include the following:

    Response time: Area chart
    Availability percentage: Billboard
    Errors: Pie chart
    Throughput: Area chart
    Page views: Billboard

    If your app is particularly complex, create a collection of linked team dashboards (data apps) for a curated, application-like experience. When it’s complete, share the dashboard with your team and any upstream or downstream teams as appropriate.
    4. Create a business performance dashboard
    Your business performance dashboard will give your teams an overview of how users are experiencing your app. Most New Relic customers want to know how their apps are experienced across different cohorts, such as geographic locations or device types.

Companies in many industries consider the following key performance indicators (KPIs) essential to business performance. NRQL queries like these can be used to build widgets for your dashboards; representative query sketches follow the list of KPIs below.
    Session count NRQL query
    To run a NRQL query for Browser session count:
    Session duration NRQL query
    To run a NRQL query for Browser session duration:
    Page views NRQL query
    To run a NRQL query for Browser page views:
    Page render time NRQL query
    To run a NRQL query for Browser page rendering:
    Conversion funnel NRQL query
    To run a NRQL query for page conversion funnel:
    Error percentage NRQL query
    To run a NRQL query for APM error percentage:
    Apdex NRQL query
    To run a NRQL query for APM Apdex:
    DOM readiness NRQL query
    To run a NRQL query for Browser DOM readiness:
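The individual queries are not included in this copy of the tutorial. As hedged sketches only (event types, attribute names, URLs, and time windows are assumptions; the funnel steps in particular must match your own pages), widgets for several of these KPIs could be built from queries like the following:

Session count (Browser):
SELECT uniqueCount(session) FROM PageView SINCE 1 week AGO

Page views (Browser):
SELECT count(*) FROM PageView SINCE 1 day AGO COMPARE WITH 1 day AGO

Page render time (Browser):
SELECT average(duration) FROM PageView TIMESERIES AUTO

Conversion funnel (Browser):
SELECT funnel(session, WHERE pageUrl LIKE '%/cart%' AS 'Cart', WHERE pageUrl LIKE '%/checkout%' AS 'Checkout') FROM PageView SINCE 1 week AGO

Error percentage (APM):
SELECT percentage(count(*), WHERE error IS true) FROM Transaction SINCE 1 day AGO

Apdex (APM):
SELECT apdex(duration, t: 0.5) FROM Transaction SINCE 1 day AGO

DOM readiness (Browser):
SELECT average(domProcessingDuration) FROM PageView TIMESERIES AUTO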
    For more help
    If you need more help, check out these support and learning resources:

    Browse the Explorers Hub to get help from the community and join in discussions.
    Find answers on our sites and learn how to use our support portal.
    Run New Relic Diagnostics, our troubleshooting tool for Linux, Windows, and macOS.
    Review New Relic’s data security and licenses documentation.

    Azure DevOps Wiki: Manage your project documentation and collaboration
Azure DevOps has built-in support for a Wiki, which gives you a lot of flexibility to document the entire project from a single location. There was a time when your documentation was a Microsoft Word document on a file server, and in most cases only the author could remember where it was and how to use it. Using Azure DevOps, we can control the overall project, document it, store the files, keep track of the changes, and have automatic builds and releases in a single platform that is available on the Internet and works the same way on Windows, Linux, and Mac.
The Azure DevOps Wiki uses markdown, a language that was created in 2004 by John Gruber in collaboration with Aaron Swartz and has been adopted by GitHub and Azure DevOps.
Markdown is not a complete language aimed at replacing HTML. The goal was to keep it easy to write and read, and there is syntax that emulates the most common HTML tags. We are going to see a few examples of markdown, and after that we will be ready to create and format our documentation.
    It is important to remember the hierarchy in Azure DevOps. You can have multiple organizations, and each organization may contain several projects. Every single project has a Wiki. The boundaries of a Wiki are at the project level and not the organization. Keep that in mind when planning your documentation structure.
    Markdown language summary
    The following summary may help you to understand the differences between markdown language and HTML. Start typing on the Azure DevOps Wiki page, and as you need some formatting resources, check the table below.
    You will be writing in markdown language quickly. Just allot some time to memorize the characters required to format text.
Heading (H1): # Title H1
Heading (H2): ## Title H2 (headings go up to H6, written with six #)
Horizontal line: ---
Bold text: **Text in bold**
Italic text: *Text in Italic*
Strikethrough: ~~Text strike~~
Quote (level 1): > Quote
Quote (level 2, indented): >> Quote
Link: [Azure Portal](http://www.azure.microsoft.com)
Inline code: `Get-AzVM`
Image: ![Azure Map](http://address.ca/img.svg)
Unordered list item: - Item 01
Ordered list item: 1. Item Numbered 01
We will cover the details in Azure DevOps in our next section, but for now we can see the markdown source on the left side and how the rendered output looks on the right side. That is the experience the reader of the document will have.
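As a hedged illustration (the page content and links below are made up for the example), the markdown source of a simple wiki page could look like this:

# Project Overview
This wiki documents the **Contoso Air** project.

## Useful links
- [Azure Portal](https://portal.azure.com)
- [Build pipeline](https://dev.azure.com)

## Getting started
1. Clone the repository
2. Run `npm install`
3. Run `npm start`

> Tip: every page created here is stored as an individual .md file in the hidden wiki repo.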

Creating your first Azure DevOps Wiki
By default, a new project does not have any Wiki configured, and we can create one in two different ways: a WikiProject or Wiki as Code. We can also combine them to get the benefit of both worlds.
Let's start by creating a WikiProject, which creates a Git repo to store all information about the Wiki, including the .md files, images, and so forth. Although it is a repo, it is hidden from view, which means that you cannot access it through Repos or even the project settings.
If you are as curious as I am, you can access it directly by browsing to https://dev.azure.com/<organization>/<project>/_git/<project>.wiki (Item 1). The default branch of this hidden repo (Item 3) contains every single page created through the portal as an individual .md file (Item 2).
Note: At the time of writing, there is no easy way to delete a WikiProject.
After creating the Wiki, the result is an editor for a new page, where we can start writing in markdown to document our project.

    Using the Azure DevOps Portal to manage your Wiki is a breeze. On the left side, we can create a page using the New page (Item 1) button, and the new page will be added at the same level of the pages that are being listed above (parent level).
    We can also create sub-pages to an existent page, and it helps to nest documentation on the same thread. To do that, click on the ellipsis on the desired page (item 2), and click on Add sub-page (item 3).
    In the same location, we can change the order of the pages by dragging and dropping them as we see fit.

    The Overview item provides a glimpse of the project, including stats, members, and a brief description of the project, which can be integrated with a Wiki.
The integration is effortless to implement. Logged in to the Azure DevOps Portal, click on Overview, and then the + Add Project Description button. In the new blade, provide a short description and select either Readme file or Wiki. In this section, we are going to choose the Wiki (Item 3); the first page (which should be the main or welcome page) will show up. Click on Save (Item 4).
    From now on, the documentation will be on the first page of our Azure DevOps Portal. A simple visual aid that will help you to memorize is that the first page always has a Home icon associated with it.
    Important note: The first page of the Wiki will always be displayed. If you change later on, the change will reflect automatically on the main page.

    Adding Wiki-as-a-code
By default, when creating new repos, the option to add a README.md is selected automatically, and it is a file that can be written in markdown the same way that we write our pages in the WikiProject.
    We can see that in action by looking at any existing repo. In the image below, we can choose the repo/README.md to keep the documentation up to date.

    If the team decides to update the documentation at the repo level, we can take advantage of that process and incorporate it into the current Wiki.
The process to add code as a Wiki is this: click on Overview (Item 1), then Wiki (Item 2), click on the dropdown menu next to the existing wiki (Item 3), and then on Publish code as a Wiki (Item 4).

    In the new blade, select the repository, the branch, the folder (by default, the README.md is on the root), and define a Wiki name (we are going to use the current repo name added by the suffix -Wiki). Click on Publish.

When we open Wiki (Item 1), we can navigate between the WikiProject (Item 2) or select from the Code Wiki that was added previously (Item 3). After choosing a Code Wiki, all the files with the .md extension are listed; click on one, and its content is displayed on the right side.

    Integrating both methods, we can have a WikiProject to be the main page of your project and use the actual README.md on your repos to have more concise and specific information about each workload/service of your current project.
    Azure DevOps Wiki: Get started
So, now you know the basics of markdown, how to manage pages in your brand-new Azure DevOps Wiki, the differences between the two Wiki types, and how to integrate both Code Wiki and WikiProject.
    Now it is your turn to start integrating Wiki into your Azure DevOps project!

    3 years ago Reply

  35. We help you stay in comfort while you take care of your business
    Miramarjala

Eduard Kabrinskiy – Devops deployment process

    The Eight Phases of a DevOps Pipeline
    Let’s break down the phases of a DevOps pipeline and clarify some common terms.

In my last article, I covered the basics of DevOps and highlighted the benefits that have motivated so many organisations to shift to this new model for software development. This article will build on the last, so if you haven’t already, go check it out.
What is DevOps? The simplest introduction to DevOps and the benefits it can provide to your organisation (medium.com).
    When talking about DevOps, it’s useful to divide the process into phases which come together to make a DevOps pipeline. This way, we can break down the problem of describing the tools and processes used throughout the various phases.
    Different people will apply their scalpel to cut the pipeline in different places and come up with a different list of phases, but the result usually describes the same process. The important thing is to be consistent with terminology within your organisation so everybody is on the same page.

    It’s worth noting that, while it’s useful to break the DevOps pipeline into phases to make it easier to discuss, in practice it is a continuous workflow followed by the same team or blend of teams, depending on the organisational structure. There are no hard barriers between each of the phases of the pipeline.

    This article will describe the eight-phase DevOps pipeline we use at Taptu and define some other terms commonly used when talking about DevOps.

Plan
The Plan stage covers everything that happens before the developers start writing code, and it’s where a Product Manager or Project Manager earns their keep. Requirements and feedback are gathered from stakeholders and customers and used to build a product roadmap to guide future development. The product roadmap can be recorded and tracked using a ticket management system such as Jira, Azure DevOps or Asana, which provide a variety of tools that help track project progress, issues and milestones.
    The product roadmap can be broken down into Epics, Features and User Stories, creating a backlog of tasks that lead directly to the customers’ requirements. The tasks on the backlog can then be used to plan sprints and allocate tasks to the team to begin development.
Code
Once the team has grabbed their coffees and had the morning stand-up, the developers can get to work. In addition to the standard toolkit of a software developer, the team has a standard set of plugins installed in their development environments to aid the development process, help enforce consistent code-styling and avoid common security flaws and code anti-patterns.
    This helps to teach developers good coding practice while aiding collaboration by providing some consistency to the codebase. These tools also help resolve issues that may fail tests later in the pipeline, resulting in fewer failed builds.
    Build
    The Build phase is where DevOps really kicks in. Once a developer has finished a task, they commit their code to a shared code repository. There are many ways this can be done, but typically the developer submits a pull request — a request to merge their new code with the shared codebase. Another developer then reviews the changes they’ve made, and once they’re happy there are no issues, they approve the pull-request. This manual review is supposed to be quick and lightweight, but it’s effective at identifying issues early.
    Simultaneously, the pull request triggers an automated process which builds the codebase and runs a series of end-to-end, integration and unit tests to identify any regressions. If the build fails, or any of the tests fail, the pull-request fails and the developer is notified to resolve the issue. By continuously checking code changes into a shared repository and running builds and tests, we can minimise integration issues that arise when working on a shared codebase, and highlight breaking bugs early in the development lifecycle.
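To make the Build phase concrete, here is a minimal sketch of what such a pull-request-triggered pipeline could look like in Azure Pipelines YAML. The build and test scripts (build.sh, run-tests.sh) are placeholders standing in for whatever your project actually uses.

```yaml
# azure-pipelines.yml: build and test every pull request (illustrative sketch only)
trigger:
  branches:
    include:
      - main            # also build direct pushes to the main branch

pr:
  branches:
    include:
      - main            # validate every pull request targeting main

pool:
  vmImage: ubuntu-latest

steps:
  - script: ./build.sh                              # hypothetical build script
    displayName: Build the codebase

  - script: ./run-tests.sh --unit --integration     # hypothetical test runner
    displayName: Run unit and integration tests

  - task: PublishTestResults@2                      # surface test results in the build summary
    inputs:
      testResultsFormat: JUnit
      testResultsFiles: '**/test-results.xml'
    condition: succeededOrFailed()
```

If the build or any test fails, the pull request is blocked and the developer is notified, exactly as described above.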
    Once a build succeeds, it is automatically deployed to a staging environment for deeper, out-of-band testing. The staging environment may be an existing hosting service, or it could be a new environment provisioned as part of the deployment process. This practice of automatically provisioning a new environment at the time of deployment is referred to as Infrastructure-as-Code (IaC) and is a core part of many DevOps pipelines. More on that in a later article.
Test
Once the application is deployed to the test environment, a series of manual and automated tests are performed. Manual testing can be traditional User Acceptance Testing (UAT), where people use the application as the customer would to highlight any issues or refinements that should be addressed before deploying into production.
    At the same time, automated tests might run security scanning against the application, check for changes to the infrastructure and compliance with hardening best-practices, test the performance of the application or run load testing. The testing that is performed during this phase is up to the organisation and what is relevant to the application, but this stage can be considered a test-bed that lets you plug in new testing without interrupting the flow of developers or impacting the production environment.
    Release
    The Release phase is a milestone in a DevOps pipeline — it’s the point at which we say a build is ready for deployment into the production environment. By this stage, each code change has passed a series of manual and automated tests, and the operations team can be confident that breaking issues and regressions are unlikely.
    Depending on the DevOps maturity of an organisation, they may choose to automatically deploy any build that makes it to this stage of the pipeline. Developers can use feature flags to turn off new features so they can’t be seen by the customers until they are ready for action. This model is considered the nirvana of DevOps and is how organisations manage to deploy multiple releases of their products every day.
    Alternatively, an organisation may want to have control over when builds are released to production. They may want to have a regular release schedule or only release new features once a milestone is met. You can add a manual approval process at the release stage which only allows certain people within an organisation to authorise a release into production.
The tooling lets you customise this; it’s up to you how you want to go about things.
    Deploy
    Finally, a build is ready for the big time and it is released into production. There are several tools and processes that can automate the release process to make releases reliable with no outage window.
    The same Infrastructure-as-Code that built the test environment can be configured to build the production environment. We already know that the test environment was built successfully, so we can rest assured that the production release will go off without a hitch.
A blue-green deployment lets us switch to the new production environment with no outage. When the new environment is built, it sits alongside the existing production environment. When the new environment is ready, the hosting service points all new requests to it. If, at any point, an issue is found with the new build, you can simply tell the hosting service to point requests back to the old environment while you come up with a fix.
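As an illustration of the blue-green pattern, here is a rough Kubernetes sketch: two Deployments run side by side and a Service selector decides which one receives traffic. All names and the image tag are made up for the example.

```yaml
# Blue-green switching with a Kubernetes Service (illustrative sketch).
# A "myapp-blue" Deployment (not shown) is already serving production traffic;
# the Service selector decides which track receives requests.
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
    track: blue          # switch to "green" to cut traffic over to the new build
  ports:
    - port: 80
      targetPort: 8080
---
# The new ("green") environment is deployed alongside the old one first.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-green
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      track: green
  template:
    metadata:
      labels:
        app: myapp
        track: green
    spec:
      containers:
        - name: myapp
          image: myregistry.example.com/myapp:2.0.0   # the new build (placeholder image)
          ports:
            - containerPort: 8080
```

Flipping the selector back to the old track is the rollback described above.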
    Operate
The new release is now live and being used by the customers. Great work!
    The operations team is now hard at work, making sure that everything is running smoothly. Based on the configuration of the hosting service, the environment automatically scales with load to handle peaks and troughs in the number of active users.
    The organisation has also built a way for their customers to provide feedback on their service, as well as tooling that helps collect and triage this feedback to help shape the future development of the product. This feedback loop is important — nobody knows what they want more than the customer, and the customer is the world’s best testing team, donating many more hours to testing the application than the DevOps pipeline ever could. You need to capture this information, it’s worth its weight in gold.
    Monitor
The ‘final’ phase of the DevOps cycle is to monitor the environment. This builds on the customer feedback provided in the Operate phase by collecting data and providing analytics on customer behaviour, performance, errors and more.
    We can also do some introspection and monitor the DevOps pipeline itself, monitoring for potential bottlenecks in the pipeline which are causing frustration or impacting the productivity of the development and operations teams.
    All of this information is then fed back to the Product Manager and the development team to close the loop on the process. It would be easy to say this is where the loop starts again, but the reality is that this process is continuous. There is no start or end, just the continuous evolution of a product throughout its lifespan, which only ends when people move on or don’t need it any more.

    3 years ago Reply

  36. We help you stay in comfort while you take care of your business
    Antiochjala

Eduard Kabrinskiy – Github devops

    Accelerating DevOps with GitHub and Azure

    Posted on May 6, 2019
    Building better apps, faster, is a key enabler of digital transformation for every company. Many businesses are facing external pressures to be more agile, and this in turn is putting more demands on development and operations teams to increase the velocity of building and delivering digital solutions. This is where DevOps practices, starting from agile development methodologies, can help.
    Microsoft is committed to helping all teams with DevOps – from developers working in small, distributed teams in the open source world to enterprises operating at scale to achieve more. Today, over 10,000 open source projects such as CPython, Pandas, and OptiKey rely on GitHub and Azure DevOps to collaborate and accelerate the pace of project innovation. Watch this video to hear from developers working on the Pandas project. Large enterprises like Royal Dutch Shell rely heavily on software to drive business growth and use GitHub and Azure DevOps to realize business agility at scale. More than 2,800 developers at Shell collaborate using GitHub to ship apps and AI models with Azure DevOps, targeting cloud, on-premises systems and edge devices.
    These are just a few examples of customers improving their DevOps practices supported by GitHub and Azure DevOps. We continue to innovate to make our DevOps services even easier and more productive. We’re excited to announce new innovations to help our customers build better apps, faster.
    GitHub and Azure DevOps:

    Unified Pipelines with YAML-defined CI/CD
    Azure Pipeline Integration with Azure Kubernetes Service
    Simplified Purchasing for Azure DevOps
    Active Directory Support for GitHub Enterprise
    Sign in to Azure and Azure DevOps with your GitHub account
    Visual Studio Subscriptions and GitHub Enterprise

    App Center:

    Mobile Backend as a Service (MBaaS) integration

    GitHub and Azure DevOps
    It’s been less than a year since Microsoft acquired GitHub, the largest developer community on the planet, with over 36 million developers from nearly every country. GitHub is at the heart of the open source community and will always be an open platform that supports all developers. And, we’re listening, responding to developer feedback and delivering over 100 new features in the last six months.
Building solutions that developers love while also meeting the needs of enterprises is a core principle of our DevOps services investments. Together, GitHub and Azure DevOps provide an end-to-end experience for development teams to easily collaborate, build and release code to Azure, on-premises or any cloud.
    Unified Pipelines with YAML-defined CI/CD
    Azure Pipelines, a core part of Azure DevOps, allows for the creation of Continuous Integration in a declarative way using YAML documents. With our new updates, development teams can now leverage the same YAML documents to build multi-stage pipelines-as-code for both Continuous Integration and Continuous Delivery. This was one of the biggest requests from our customers. Adding the ability to create deployment pipelines with YAML files and store them in source control helps drive a tighter feedback loop between development and operation teams, relying on clear, readable documents.
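As a rough idea of what such a multi-stage YAML definition can look like, here is a minimal sketch with one build stage and one deployment stage. The scripts, artifact name and environment name are placeholders, not values from the announcement.

```yaml
# Multi-stage azure-pipelines.yml: CI and CD described in one file (sketch)
trigger:
  - main

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: BuildAndTest
        steps:
          - script: ./build.sh                 # hypothetical build step
            displayName: Build
          - script: ./run-tests.sh             # hypothetical test step
            displayName: Test
          - publish: $(System.DefaultWorkingDirectory)/out
            artifact: drop                     # hand the output to later stages

  - stage: DeployToStaging
    dependsOn: Build
    jobs:
      - deployment: Staging
        environment: staging                   # environments can carry approvals and checks
        strategy:
          runOnce:
            deploy:
              steps:
                - download: current
                  artifact: drop
                - script: ./deploy.sh staging  # hypothetical deployment script
```

Because the whole pipeline lives in source control, changes to the deployment process go through the same review process as any other code change.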
    Pipelines integration with Kubernetes
Not only have we made it simple to collaborate around CI/CD pipelines, Azure Pipelines can now be easily integrated with Kubernetes clusters. Connect to Azure Kubernetes Service with only a few clicks, or connect to Kubernetes running on-premises or on any public cloud. Azure Pipelines analyzes your repository and suggests the right set of YAML templates to configure your pipeline and all the required Kubernetes manifest files for deploying to the cluster. Diagnostics are also improved with rich information about your pods, such as logs, the container images running on them and an image detail view. This capability can target any Kubernetes environment, such as Azure Kubernetes Service, Amazon EKS and Red Hat OpenShift.
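For illustration only, a deployment step targeting a Kubernetes cluster might be expressed with the KubernetesManifest task roughly like this; the service connection name, namespace, manifest paths and image are assumptions.

```yaml
# Deploying manifests to a Kubernetes cluster from Azure Pipelines (sketch)
steps:
  - task: KubernetesManifest@0
    displayName: Deploy to AKS
    inputs:
      action: deploy
      kubernetesServiceConnection: my-aks-connection   # assumed service connection name
      namespace: default
      manifests: |
        manifests/deployment.yml
        manifests/service.yml
      containers: myregistry.azurecr.io/myapp:$(Build.BuildId)   # assumed image name
```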

    Simplified Purchasing for Azure DevOps
    Based on feedback, we have simplified how you license and pay for Azure DevOps capabilities. Here are just a few of the changes:

    Azure Artifacts moves to a consumption-based model, with 2GB free for each organization.
    Basic License Model is now a fixed price and includes Azure Artifacts for Azure DevOps Server
    Introduced a new Basic and Test Plans license option.

    GitHub support for Azure Active Directory
Historically, integrating enterprise-grade security has required significant work with GitHub. We heard loud and clear from customers that they want seamless integration with Azure Active Directory (AD). Azure AD is the most widely adopted identity and security system in the enterprise, with more than 200 million enterprise users.
Today, we’re excited to announce that GitHub Enterprise will support Azure Active Directory. GitHub customers can now leverage their existing Azure Active Directory solution for group membership in GitHub, which reduces administration time, improves auditability and increases security.
    Sign in to Azure and Azure DevOps with your GitHub account
    We also know there are a lot of developers who have GitHub personal accounts and don’t have a Microsoft managed identity. We’re also excited to announce that GitHub users can now sign in to Azure and Azure DevOps using an existing GitHub account. Just go to the Azure portal or Azure DevOps page and click the GitHub icon to login. This integration makes it even easier for developers to go from code to cloud.
    Visual Studio Subscriptions with GitHub Enterprise
    In addition to making identity and sign-on easier and more secure, we are making it easier to purchase GitHub Enterprise. Today, we’re pleased to announce Visual Studio Subscription with GitHub Enterprise offerings that give enterprise customers an easy and economical way to purchase both Visual Studio and GitHub Enterprise at one low price.
    App Center
    Visual Studio App Center automates the lifecycle of your iOS, Android, Windows, and macOS apps. Today, we’re announcing the inclusion of Azure Mobile Backend as a Service (MBaaS) capabilities into App Center. MBaaS capabilities help developers build applications faster without the need for managing infrastructure through the following Azure Mobile Apps services:

    Corporate sign-in – Connect your apps to an Azure Active Directory Tenant to manage users, identity providers, and user flows.
    Push notifications – Use push notifications to send millions of personalized messages to iOS, Android, Windows, or Nokia X devices in seconds.
Offline data sync – Use the offline sync service, powered by Azure Cosmos DB, to improve your app experience and make it easier to persist data across multiple devices.

    Conclusion
    There’s never been a better time to get started with DevOps services and mobile development on Azure. We’re excited to hear your feedback on these new capabilities designed to help you build better apps, faster.

    3 years ago Reply

  37. We help you stay in comfort while you take care of your business
    Baltimorejala

Eduard Kabrinskiy – Visual studio 2019 azure devops

    Continuous Integration & Continuous Deployment of SSDT Projects: Part 2, Creating Azure DevOps pipelines
    CI/CD for databases is tricky! Let me show you how you can do it by using an SSDT project template and Azure DevOps.

    Radoslav Gatev
    Microsoft Azure MVP, MCSE: Cloud Platform and Infrastructure, Software Architect, blogger, conference speaker, open source contributor, ukulele strummer

    Part 1: Make it repeatable and self-sufficient turned out to be a big hit in my blog. The Visual Studio database project template reached 20k installs. I got a number of questions about it and two of my readers (thanks!) asked me to do a second part to tell you about Release Management of databases in Azure DevOps (formerly Visual Studio Team Services, formerly Visual Studio Online).
I have seen projects that have CI/CD defined for their application code but not for their databases. And when they do releases, they trigger the pipelines for the applications and deploy the database manually because, they say, it’s tricky. So, let me share the way I found to deal with all this complexity.
    Before we start, what’s new about this project template?
    It’s been exactly a year, and the Visual Studio project template already has 20k installs, and the interest in doing release management of database changes is still pretty high! Thanks, everyone! That’s why I decided to enable the support for Visual Studio 2019! It’s available now at Visual Studio Marketplace! If you have any ideas on how to make it better go ahead and suggest them! Or even open a pull request in GitHub.
    Getting started with Azure DevOps
Azure DevOps is the evolution of VSTS (Visual Studio Team Services). It is the result of years of Microsoft using its own tools and developing a process for building and delivering products in an efficient and effective way. An interesting thing is that Azure DevOps is being developed by using Azure DevOps. What a recursion, huh!
    I think of Azure DevOps as the union of VSTS (Visual Studio Team Services), TFS (Team Foundation Server) and Azure, with some improvements and a few extras added. So, what was called VSTS is now Azure DevOps Services which is still a cloud offering with 99.9% SLA. And what was called Team Foundation Server (TFS) is now called Azure DevOps Server – the on-premises offering that’s built on a SQL Server back end.
    Azure DevOps is split into multiple subservices:

    Azure Boards – Powerful work tracking with Kanban boards, backlogs, team dashboards, and custom reporting.
    Azure Repos – a set of version control tools that you can use to manage your code. Two types of version control systems, Git and Team Foundation Version Control (TFVC).
    Azure Pipelines – CI/CD that works with any language, platform, and cloud. Every open source project gets unlimited CI/CD minutes for free.
    Azure Artifacts – Maven, npm, and NuGet package feeds from public and private sources.
    Azure Test Plans – All in one planned and exploratory testing solution.

If you want to learn more about the DevOps practices at Microsoft, visit the DevOps Resource Center. In this post, we are going to focus on Release Management with Azure Pipelines.
    Prerequisites
    Set up your project to use the Continuous Deployment template which is based on the standard SSDT (SQL Server Data Tools) project. Follow the steps from Part 1: Make SSDT projects repeatable and self-sufficient, if you haven’t done so.
    Create an Azure DevOps project if you haven’t done so. You can keep your source code both internally in Azure Repos or externally (GitHub, Subversion, Bitbucket Cloud or External Git).
    You will need access to an active Azure subscription.
    Let’s create a build pipeline
    As promised, we’ll be looking at pipelines in this post. The first part of the chain is the definition of a build pipeline. After the code has been committed to Azure Repos, a build can be triggered. Of course, there are always options to schedule it or to manually trigger it.
    Navigate to dev.azure.com. Go to Pipelines in your project and create a new Build pipeline. Then you have to tell it where your source code is. Choose an empty template.
    Your build pipeline will look like this:
    A build pipeline for the SSDT project
    Let’s review each task individually.
    Visual Studio Build
This task builds the whole solution. If your database project is configured properly, it will produce a DACPAC file, which becomes the main artifact containing the definition of the entire database. You can pass MSBuild Arguments and change the build platform and build configuration. I’ve exposed variables for Platform and Configuration in order to make it reusable. Platform is set to any cpu, Configuration is set to release.
    Copy Files to the staging directory
    Publish Artifact
    This produces the end result of the build which will be consumed by the Release pipeline.
    The configuration of Publish Build Artifacts.
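These tasks are configured here in the classic (visual) editor; purely as a reference, an approximate YAML equivalent of the three build tasks could look like the sketch below, with the solution path and artifact name assumed.

```yaml
# Rough YAML equivalent of the classic build described above (sketch, names assumed)
trigger:
  - master

pool:
  vmImage: windows-latest        # SSDT builds need a Windows agent with Visual Studio

variables:
  Platform: 'any cpu'
  Configuration: 'release'

steps:
  - task: VSBuild@1              # builds the solution and produces the DACPAC
    inputs:
      solution: '**/*.sln'
      platform: $(Platform)
      configuration: $(Configuration)

  - task: CopyFiles@2            # copy the DACPAC to the artifact staging directory
    inputs:
      SourceFolder: $(Build.SourcesDirectory)
      Contents: '**/*.dacpac'
      TargetFolder: $(Build.ArtifactStagingDirectory)

  - task: PublishBuildArtifacts@1   # the artifact consumed by the release pipeline
    inputs:
      PathtoPublish: $(Build.ArtifactStagingDirectory)
      ArtifactName: drop
```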
    DacPac Schema Compare
As its name suggests, this task shows the schema changes of your database between two builds and outputs the results to the build summary. This task comes as a part of Colin’s ALM Corner Build & Release Tools, which you have to install from the marketplace.
    You must enable the Allow Scripts to Access OAuth Token option which can be found in Build Options, as you can see below:
    Click on the agent job which is named “Phase 1” in my case. Then you will be able to check this checkbox.
    If you don’t do so, you will get an error stating that the token for authenticating cannot be found.
    You can see how this task is configured below. Compiled DacPac Path is used to find the dacpac produced by the current build. Drop Name is the name of the artifact produced by the previous build.

    Go ahead and trigger a build!
You will find two reports in the Build summary generated by this task, so you will be able to see what is going to be executed against your real database (assuming no one has messed around with it manually):
    Schema Change Summary and Change Script
    Let’s create a release pipeline
    Go to Pipelines > Releases and create a new empty Release pipeline.
    Set artifacts
    Add a new artifact – choose Build as an artifact source type and select the Build pipeline producing it.
Go and edit the first stage of your release pipeline. By the end of this part it will look like this:
    The definition of a Stage named Dev
    Resource Template Deployment
    We are going to deploy against an Azure SQL Database in this post. So, let’s go straight to the Azure portal and create it? No! Because that’s not DevOps! Infrastructure as code, as a key DevOps practice, says that you have to manage and provision everything through machine-readable definition files, rather than physical hardware configuration or interactive configuration tools.
We need an ARM (Azure Resource Manager) template for a database. The template used here accepts 3 input parameters – sqlDatabaseName, sqlAdministratorLogin and sqlAdministratorLoginPassword. It creates a SQL server containing a single database. The location of the resources comes from the resource group. And finally, it outputs the FQDN of the server so the tasks following the Resource template deployment will be able to use it.
By no means is this the perfect definition of a Resource template, but it’s simple enough and it works. For the sake of simplicity, I have it uploaded to a gist in GitHub, which is easily referenced by URL upon deployment. Ultimately, you will have more projects than just the database in your CI/CD pipelines, so you will probably want to consider using one or more Resource Group projects in Visual Studio, building and validating them, exposing them as a separate artifact and eventually deploying them.
    Go to your first Stage in the Release Pipeline and add Azure Resource Group Deployment task. You will have to create a Service connection of type Azure Resource Manager which will grant Azure DevOps the rights to access your Azure subscription.
    It will look like this:
    The configuration of the Azure Resource Group Deployment task.
    Maybe you have noticed that I’ve exposed some variables like $(ResourceGroup) and $(DatabaseName). That’s because I prefer to make those things reusable so you can copy them across environments or even create Task groups.
    You have to be careful with the Deployment mode. Choose Incremental as it leaves unchanged resources that exist in the resource group but are not specified in the template as opposed to Complete mode which deletes resources that are not present in your templates.
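For readers who prefer pipelines-as-code, an approximate YAML form of this task might look like the following sketch; the service connection name, location, gist URL and the admin login/password variables are assumptions rather than values from the article.

```yaml
# YAML sketch of the Azure Resource Group Deployment task (the article uses the classic editor)
steps:
  - task: AzureResourceGroupDeployment@2
    displayName: Provision SQL server and database
    inputs:
      azureSubscription: 'my-azure-service-connection'       # assumed service connection
      action: 'Create Or Update Resource Group'
      resourceGroupName: $(ResourceGroup)
      location: 'West Europe'                                # assumed region
      templateLocation: 'URL of the file'
      csmFileLink: 'https://gist.githubusercontent.com/.../azuredeploy.json'  # placeholder gist URL
      overrideParameters: >-
        -sqlDatabaseName $(DatabaseName)
        -sqlAdministratorLogin $(SqlAdminLogin)
        -sqlAdministratorLoginPassword $(SqlAdminPassword)
      deploymentMode: Incremental      # leave unrelated resources in the group untouched
```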
    ARM Outputs
I found the ARM Outputs extension pretty useful because it lets you use the output values of an ARM template as variables in Azure Pipelines. ARM templates are what form your resource names based on input parameters, and they also have direct access to Microsoft Azure.
    Install this extension from Azure DevOps Marketplace.
    The configuration of the ARM Outputs task
    Azure SQL Database Deployment
    And finally, that is the task executing the actual database deployment. You give it a DACPAC file and it deploys all your database changes using SqlPackage.exe under the hood. Apart from the essential parameters you can pass additional parameters like /p:IgnoreAnsiNulls=True , /p:IgnoreComments=True , /p:BlockOnPossibleDataLoss=false or /p:DropObjectsNotInSource=true just to name a few.
    I am using the same variables as in the ARM template deployment.
    The configuration of the Azure SQL Database Deployment task.
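Again as a YAML-flavoured reference only, the same deployment step could be sketched roughly like this; the service connection, variable names and the output variable carrying the server FQDN are assumptions.

```yaml
# YAML sketch of the Azure SQL Database Deployment step (classic editor in the article)
steps:
  - task: SqlAzureDacpacDeployment@1
    displayName: Deploy DACPAC to Azure SQL
    inputs:
      azureSubscription: 'my-azure-service-connection'   # assumed service connection
      ServerName: $(sqlServerFqdn)                       # assumed variable fed by the ARM Outputs task
      DatabaseName: $(DatabaseName)
      SqlUsername: $(SqlAdminLogin)
      SqlPassword: $(SqlAdminPassword)
      DacpacFile: '$(System.DefaultWorkingDirectory)/**/*.dacpac'
      AdditionalArguments: '/p:BlockOnPossibleDataLoss=false /p:IgnoreComments=True'
```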
    Set up environments
    Having this reusable definition, we can go ahead and clone the stage in order to define multiple environments. Add as many as you would like. And not only that, you can create the relationships between them, the pre- and post-deployment conditions, approvals and other options.
    A release with three environments (stages) – Dev, UAT, and Production. Pipeline variables across environments (stages).
    I hope this is helpful for you! I would love to know more about how you typically deploy your databases in your organization.

    3 years ago Reply

  38. We help you stay in comfort while you take care of your business
    LOUISIANAjala

Eduard Kabrinskiy – Azure ml ops

    Introduction to MLOPS with Microsoft Azure

    pradeep natarajan
Aug 18 · 9 min read

Machine learning is quite a buzz these days and companies are investing a lot to onboard artificial intelligence capabilities into their systems. Yet a study published by deeplearning.ai in Dec 2019 suggests that only 22% of companies are able to successfully productionize machine learning projects. It sounds quite paradoxical, because there has recently been a surge in data scientist talent in the market thanks to the hype around AI/ML.
One might challenge the quality of data scientists’ skills here, but a closer inspection will reveal that the real issue behind the failures of ML projects lies somewhere else. Before we do a deep dive into this reason, we should first understand who is involved in an ML project.
    Teams in a Machine Learning Project
In medium to large organizations with a mature IT infrastructure, a machine learning project cannot be done by a single team of data scientists; many teams have to be involved. Usually, you will see these teams getting engaged in an ML project.
• Data Scientist — They are the ones who do data analysis and model creation. They usually do a POC on Jupyter notebooks, but putting their POC into production requires collaboration from the multiple teams mentioned below.
• Data Engineer — This team is responsible for creating data pipelines from one or many sources to bring data to a data lake or data warehouse, where data scientists fetch data for model building.
• Testing Team — This team is required to test the model created by the data scientist to ensure that it inferences with good accuracy on testing data, before deploying it on production.
• IT Team — They are responsible for application development and maintenance. A data scientist will need the IT team to deploy the ML POC into production.
• Operations Team — They are supposed to monitor the application in production and maintain its health. Data scientists need them to monitor the performance of their ML model with some KPIs.
    Challenges for Successful Machine Learning Projects
In their report “Predicts 2020: Artificial Intelligence — the Road to Production”, Gartner mentioned: “A litmus test of organizations’ maturity is how quickly and repeatedly they can get these AI systems into production. Our surveys are showing that organizations are not managing to do this as quickly as they had hoped.”
    This is indeed the truth because companies are struggling to keep their ML projects afloat in production and there are many root causes behind this, but the below two reasons summarize the situation.
    1. Lack of Team Collaborations
Onboarding many teams in a project can sometimes become a recipe for failure. Sometimes data scientists are multi-skilled enough to engineer basic data pipelines for data collection and do in-depth testing, but they cannot do away with the involvement of the IT and Operations teams.
Unfortunately, all these teams end up working in silos, either unintentionally or intentionally (due to office politics). For example, the IT team may not have knowledge of the ML model which the data scientists asked them to deploy. But after deployment, the IT team realizes that the data scientist created the model with some assumptions which are adversely affecting the live services, and they have to do a rollback.
This lack of collaboration spirals ML projects down to closure, either before they are deployed in production or shortly after they go into production.
    2. Unable to do Iterations for Improvements
Just like any software, the initial versions of ML models are always far from stable. In traditional software, new iterations of development are easy because they only require code changes. But in a machine learning system, a new version requires not only coding but also data collection for the training of the model, which increases the timeline for the development of a new version.
Once developed, the model requires testing for quality assurance before production deployment. And after that, feedback is expected from operations KPIs for creating the next version.
The process described above for a machine learning project lacks automation and involves too many teams. Hence it is not possible to deliver new iterations of the ML project into production frequently and with ease. Due to the lack of improvements in business value over a period of time, these ML projects are subsequently closed down.

    Meet MLOPs — The DevOps for Machine Learning Projects
Not very far back, the software industry was also grappling with many similar issues while executing software projects. It came up with the process of DevOps to streamline development and operations, and also to automate several manual touch points with what is known as the DevOps pipeline.
The machine learning industry has recently come up with the concept of MLOps, where it has borrowed the same principles of DevOps and tailored them to the needs of machine learning projects. Similar to DevOps, in MLOps the focus is to create automated ML pipelines for Continuous Integration (CI), Continuous Deployment (CD), and Continuous Testing (CT).
To put this in layman’s terms, the goal of MLOps is to create automated ML pipelines that fast-track the time taken from ideation to execution and deployment, and that can be reused for multiple iterations in the life-cycle of a machine learning project.
    MLOPs Pipelines
Much is said about DevOps pipelines but, breaking down the hype, a pipeline is nothing but a series of predefined steps that are executed from start to end automatically, every time you plan to introduce new changes.
A big end-to-end pipeline can consist of smaller pipelines for designated work. In an MLOps architecture, you will generally find the following pipelines –
    1) Data Pipeline
To train any ML model you need to collect data from one or many sources through what is known as the ETL process, which stands for Extract, Transform, and Load. In this process, data is extracted from the source system(s), transformed for cleaning and preprocessing, and then loaded into a target data store (data lake). This entire process is encapsulated in the Data Pipeline, which is usually invoked in the initial stage of the other MLOps pipelines for data collection.
    2) Environment Pipeline
Your development environment may require not only external libraries but also some custom libraries and dependencies. If these libraries are not imported, the code will not run. An Environment Pipeline ensures that all these dependent libraries are loaded properly and consistently every time before proceeding further in the MLOps pipeline.
    3) Training Pipeline
    This pipeline is responsible for doing the actual training of our ML model. Before the execution of this pipeline, you will usually call the data pipeline and environment pipeline for provisioning data store and loading dependencies. Once training is completed the weights of the models created are saved at the end so that they can be used further in the subsequent pipelines.
    4) Continuous Integration Pipeline
This pipeline enforces the continuous integration culture of DevOps, where small changes are committed to the code repository frequently instead of in one big-bang commit. Each of these small commits has to undergo the scrutiny of code quality checks and unit testing, so that any issue can be revealed to the project team early in the development phase. In Python, for code quality, we can use linting libraries like PyLint and Flake8.
    5) Testing Pipeline
The model created should be tested properly before it can be deployed to production, because your model will have to work with the existing production code of the application. Unless your model passes all the tests and validations, it will not be allowed to pass through this pipeline; the overall execution of the MLOps pipeline is aborted and a failure report is sent to the data scientist for review. Only if testing is successful is the model allowed into the deployment pipeline.
    6) Continuous Deployment Pipeline
This pipeline takes care of the deployment process and is usually invoked last, when the earlier pipelines have run and the model has been given a green light for deployment. Depending upon the deployment strategy and architecture, the model can be deployed directly onto a server, packaged in a Docker container, or even rolled out across Kubernetes clusters.
The pipelines that we discussed above can be created as per your needs and are connected together to create the bigger MLOps pipeline architecture. Moreover, you may find different names for these pipelines, but the underlying purpose will remain the same.
Now the question remains: how do you create an MLOps pipeline? You can very well write custom pipelines from scratch, but an easier and more time-efficient option is to use cloud services like Google Cloud Platform, Microsoft Azure or AWS.
    We will give you a nutshell view of how you can implement MLOPs in Microsoft Azure.
    MLOPs with Microsoft Azure
    Microsoft Azure provides many robust services in its ecosystem to create an end to end MLOps pipeline through its product Azure DevOps. It was launched in 2018 but existed in a crude form as Visual Studio Teams since 2006. Azure DevOps comes with the following features and services –
• Azure Boards for better collaboration within the team and agile planning.
• Azure Pipelines for creating CI/CD pipeline architecture.
• Azure Repos for git repositories to ensure proper version control of the codebase.
• Azure Artifacts for managing dependencies in the codebase.
• Azure Test Plans for planned and exploratory testing solutions.
    Azure DevOps is completely flexible and agnostic to both platform and clouds. This means it is not required for you to use Azure Machine Learning Services or Azure Storage in order to use Azure DevOps, you can use your own tools and services. It supports all types of languages (Python, Java, PHP, etc) and OS (Windows, Linux, and macOS) and amazingly Azure DevOps pipelines can be connected to other clouds like GCP and AWS.
    Creating Pipelines with Azure DevOps
    Shifting our focus to the earlier discussion of MLOPs pipelines, we can create these pipelines by using the Azure Pipeline service in Azure DevOps.
Assuming you are registered with the Azure platform and have created a project in Azure DevOps, the steps are –

    Select the Pipelines option in the left sidebar and click on “New Pipeline”.

    2. In the next section, select the appropriate code repository option as per your needs.

3. In the Configuration section, you need to select the YAML file associated with your pipeline. (You will have to prepare the YAML file in advance and define all the tasks for your pipeline; see the sketch after these steps.)

4. You will then be asked to review the YAML file and then “Run” the pipeline.
    5. Once the pipeline runs you can see the status of execution of each task in the pipeline.
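As promised above, here is an illustrative YAML file for a small training-and-testing pipeline; the repository layout, script names, variables and Python version are assumptions and will differ in your project.

```yaml
# Illustrative azure-pipelines.yml for a simple train-and-test MLOps pipeline (sketch)
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0          # "environment pipeline": pin the Python version
    inputs:
      versionSpec: '3.8'

  - script: pip install -r requirements.txt
    displayName: Install dependencies

  - script: flake8 src/               # code quality gate for continuous integration
    displayName: Lint

  - script: python src/train.py --data $(DataPath) --output $(Build.ArtifactStagingDirectory)/model
    displayName: Train model          # "training pipeline": produce the model weights

  - script: pytest tests/             # "testing pipeline": abort the run on failing validations
    displayName: Run tests

  - task: PublishBuildArtifacts@1     # hand the trained model to the deployment pipeline
    inputs:
      PathtoPublish: $(Build.ArtifactStagingDirectory)/model
      ArtifactName: model
```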

Deployment Pipeline for Docker in Azure DevOps
    We mentioned earlier that the strategy for the deployment pipeline may include Docker which is used to create containers for packaging all your code and dependencies. Let us see how we can do this in Azure DevOps.

As a prerequisite, you should have a Dockerfile created in your code repository.

2. Create the container registry by running the appropriate commands in the Azure CLI.

3. Create a new Azure pipeline for continuous build and push of the Docker image to the container registry created above. The YAML file of the pipeline will look like the sketch shown after these steps.

    4. Create a Web App for Containers from the Azure Portal and configure it with the registry we have created above.
    5. In Azure pipelines, create a new release pipeline by selecting the template for Azure App Service Deployment to continuously integrate dockerized changes.
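The sketch below illustrates what the build-and-push pipeline referenced in step 3 might look like; the service connection name, repository name and tags are assumptions, not values from the original post.

```yaml
# Minimal sketch: build a Docker image and push it to the container registry
trigger:
  - main

pool:
  vmImage: ubuntu-latest

steps:
  - task: Docker@2
    displayName: Build and push image to the container registry
    inputs:
      containerRegistry: 'my-acr-service-connection'   # assumed Docker registry service connection
      repository: myapp                                # assumed image repository name
      command: buildAndPush
      Dockerfile: '**/Dockerfile'
      tags: |
        $(Build.BuildId)
        latest
```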
    Conclusion
In this article, we understood why it is so difficult to successfully deliver an ML project into production and how MLOps pipelines, a concept derived from DevOps, can help in the successful delivery of ML projects. We discussed various types of pipelines possible in MLOps and also saw how to create these pipelines on the Azure DevOps platform.

    3 years ago Reply

  39. We help you stay in comfort while you take care of your business
    Rochesterjala

Eduard Kabrinskiy – Ml devops

    ML Ops: Machine Learning as an Engineering Discipline
As ML matures from research to applied business solutions, we also need to improve the maturity of its operation processes.

    Cristiano Breuel
Jan 3 · 10 min read
    So, your company decided to invest in machine learning. You have a talented team of Data Scientists churning out models to solve important problems that were out of reach just a few years ago. All performance metrics are looking great, the demos cause jaws to drop and executives to ask how soon you can have a model in production.
    It should be pretty quick, you think. After all, you already solved all the advanced scienc-y, math-y problems, so all that’s left is routine IT work. How hard can it be?
Pretty hard, it turns out. Deeplearning.ai reports that “only 22 percent of companies using machine learning have successfully deployed a model”. What makes it so hard? And what do we need to do to improve the situation?
    Let’s start by looking at the root causes.
    Challenges
    In the world of traditional software development, a set of practices known as DevOps have made it possible to ship software to production in minutes and to keep it running reliably. DevOps relies on tools, automation and workflows to abstract away the accidental complexity and let developers focus on the actual problems that need to be solved. This approach has been so successful that many companies are already adept at it, so why can’t we simply keep doing the same thing for ML?
    The root cause is that there’s a fundamental difference between ML and traditional software: ML is not just code, it’s code plus data. An ML model, the artifact that you end up putting in production, is created by applying an algorithm to a mass of training data, which will affect the behavior of the model in production. Crucially, the model’s behavior also depends on the input data that it will receive at prediction time, which you can’t know in advance.

    While code is carefully crafted in a controlled development environment, data comes from that unending entropy source known as “the real world”. It never stops changing, and you can’t control how it will change. A useful way to think of their relationship is as if code and data live in separate planes, which share the time dimension but are independent in all others. The challenge of an ML process is to create a bridge between these two planes in a controlled way.

    This fundamental disconnect causes several important challenges that need to be solved by anyone trying to put an ML model in production successfully, for example:

    Slow, brittle and inconsistent deployment
    Lack of reproducibility
    Performance reduction (training-serving skew)

    Since the word “data” has been already used several times in this article, you may be thinking of another discipline that could come to our rescue: Data Engineering. And you would be right: Data Engineering does provide important tools and concepts that are indispensable to solving the puzzle of ML in production. In order to crack it, we need to combine practices from DevOps and Data Engineering, adding some that are unique to ML.
Thus, ML Ops sits at the intersection of these three disciplines, and we could define it as follows:

    ML Ops is a set of practices that combines Machine Learning, DevOps and Data Engineering, which aims to deploy and maintain ML systems in production reliably and efficiently.

    Let’s now see what this actually means in more detail, by examining the individual practices that can be used to achieve ML Ops’ goals.
    Practices
    Hybrid Teams
    Since we’ve already established that productionizing an ML model requires a set of skills that so far were considered separate, in order to be successful we need a hybrid team that, together, covers that range of skills. It is of course possible that a single person might be good enough at all of them, and in that case we could call that person a full ML Ops Engineer. But the most likely scenario right now is that a successful team would include a Data Scientist or ML Engineer, a DevOps Engineer and a Data Engineer.
    The exact composition, organization and titles of the team could vary, but the essential part is realizing that a Data Scientist alone cannot achieve the goals of ML Ops. Even if an organization includes all necessary skills, it won’t be successful if they don’t work closely together.
    Another important change is that Data Scientists must be proficient in basic software engineering skills like code modularization, reuse, testing and versioning; getting a model to work great in a messy notebook is not enough. This is why many companies are adopting the title of ML Engineer, which emphasizes these skills. In many cases, ML Engineers are, in practice, performing many of the activities required for ML Ops.
    ML Pipelines
    One of the core concepts of Data Engineering is the data pipeline. A data pipeline is a series of transformations that are applied to data between its source and a destination. They are usually defined as a graph in which each node is a transformation and edges represent dependencies or execution order. There are many specialized tools that help create, manage and run these pipelines. Data pipelines can also be called ETL (extract, transform and load) pipelines.
ML models always require some type of data transformation, which is usually achieved through scripts or even cells in a notebook, making them hard to manage and run reliably. Switching to proper data pipelines provides many advantages in code reuse, run-time visibility, management and scalability.
    Since ML training can also be thought of as a data transformation, it is natural to include the specific ML steps in the data pipeline itself, turning it into an ML Pipeline. Most models will need 2 versions of the pipeline: one for training and one for serving. This is because, usually, the data formats and way to access them are very different between each moment, especially for models that are served in real-time requests (as opposed to batch prediction runs).

The ML Pipeline is a pure code artifact, independent from specific data instances. This means that it’s possible to track its versions in source control and automate its deployment with a regular CI/CD pipeline, a core practice from DevOps. This lets us connect the code and data planes in a structured and automated way.

    Note that there are two distinct ML pipelines: the training pipeline and the serving pipeline. What they have in common is that the data transformations that they perform need to produce data in the same format, but their implementations can be very different. For example, the training pipeline usually runs over batch files that contain all features, while the serving pipeline often runs online and receives only part of the features in the requests, retrieving the rest from a database.
    It is important, however, to ensure that these two pipelines are consistent, so one should try to reuse code and data whenever possible. Some tools can help with that goal, for example:

    Transformation frameworks like TensorFlow Transform can ensure that calculations based on training set statistics, like averages and standard deviations for normalization, are consistent.
    Feature Stores are databases that store values that are not part of a prediction request, for example features that are calculated over a user’s history.

    Model and Data Versioning
    In order to have reproducibility, consistent version tracking is essential. In a traditional software world, versioning code is enough, because all behavior is defined by it. In ML, we also need to track model versions, along with the data used to train it, and some meta-information like training hyperparameters.
    Models and metadata can be tracked in a standard version control system like Git, but data is often too large and mutable for that to be efficient and practical. It’s also important to avoid tying the model lifecycle to the code lifecycle, since model training often happens on a different schedule. It’s also necessary to version data and tie each trained model to the exact versions of code, data and hyperparameters that were used. The ideal solution would be a purpose-built tool, but so far there is no clear consensus in the market and many schemes are used, most based on file/object storage conventions and metadata databases.
    Model validation
    Another standard DevOps practice is test automation, usually in the form of unit tests and integration tests. Passing these tests is a prerequisite for a new version to be deployed. Having comprehensive automated tests can give great confidence to a team, accelerating the pace of production deployments dramatically.
    ML models are harder to test, because no model gives 100% correct results. This means that model validation tests need to be necessarily statistical in nature, rather than having a binary pass/fail status. In order to decide whether a model is good enough for deployment, one needs to decide on the right metrics to track and the threshold of their acceptable values, usually empirically, and often by comparison with previous models or benchmarks.
It’s also not enough to track a single metric for the entirety of the validation set. Just as good unit tests must test several cases, model validation needs to be done individually for relevant segments of the data, known as slices. For example, if gender could be a relevant feature of a model, directly or indirectly, you should track separate metrics for male, female and other genders. Otherwise, the model could have fairness issues or under-perform in important segments.
    If you manage to get models validated in an automated and reliable way, along with the rest of the ML pipeline, you could even close the loop and implement online model training, if it makes sense for the use case.
    Data validation
    A good data pipeline usually starts by validating the input data. Common validations include file format and size, column types, null or empty values and invalid values. These are all necessary for ML training and prediction, otherwise you might end up with a misbehaving model and scratching your head looking for the reason. Data validation is analogous to unit testing in the code domain.
In addition to the basic validations that any data pipeline performs, ML pipelines should also validate higher-level statistical properties of the input. For example, if the average or standard deviation of a feature changes considerably from one training dataset to another, it will likely affect the trained model and its predictions. This could be a reflection of actual change in the data, or it could be an anomaly caused by how the data is processed, so it’s important to check and rule out systemic errors as causes that could contaminate the model, and fix them if necessary.

    Monitoring
    Monitoring production systems is essential to keeping them running well. For ML systems, monitoring becomes even more important, because their performance depends not just on factors that we have some control over, like infrastructure and our own software, but also on data, which we have much less control over. Therefore, in addition to monitoring standard metrics like latency, traffic, errors and saturation, we also need to monitor model prediction performance.
    An obvious challenge with monitoring model performance is that we usually don’t have a verified label to compare our model’s predictions to, since the model works on new data. In some cases we might have some indirect way of assessing the model’s effectiveness, for example by measuring click rate for a recommendation model. In other cases, we might have to rely on comparisons between time periods, for example by calculating a percentage of positive classifications hourly and alerting if it deviates by more than a few percent from the average for that time.
    Just like when validating the model, it’s also important to monitor metrics across slices, and not just globally, to be able to detect problems affecting specific segments.
    Summary
As ML matures from research to applied business solutions, we also need to improve the maturity of its operation processes. Luckily, we can extend many practices from disciplines that came before ML.
Each of the ML Ops practices described above builds on established DevOps and Data Engineering practices, extended for the specifics of ML.

    3 years ago Reply

  40. We help you stay in comfort while you take care of your business
    Downeyjala

Eduard Kabrinskiy – K8s devops

    Handling Secrets in Azure DevOps Deployment Pipelines and K8s
    Avoid leaking secure configuration using CD processes and Kubernetes Secrets
    John Clarke
Mar 20, 2019 · 6 min read

    Introduction
Almost all back end software requires some form of ‘secret’ configuration — that is, configuration that the software needs in order to run, but knowledge of the values used needs to be restricted. Examples of this form of configuration could include:

    Usernames and passwords, such as database credentials
    Credentials for external services, such as API keys and client secrets
Digital certificates and other key material for TLS and crypto
    Anything else your production systems need that your devs don’t!

Traditionally most systems have been relatively naïve in their handling of this form of configuration, generally resulting in it being stored for all to see in source control, and generally known by far more people than is advisable.
In this article we’ll see how we can combine the variable system in Azure DevOps deployment pipelines with Kubernetes’ built-in support for secret objects to create a secure solution to this problem with minimal effort.
    Before You Begin — The Ops Considerations
    Right, before we even start with the ‘techie’ bit of this, we need to consider where, operationally, these secrets are coming from, and who comprises the ‘select few’ that know them. Let’s take, as an example, creating a new service that will be backed by a DBaaS offering such as MongoDB’s Atlas product.
First, someone is going to need to sign up to the service and create a new database instance. At this point they’ll be presented with the credentials needed by the service in order to access the database. So already we have two sets of ‘secret’ credentials; clearly, we don’t want just anyone being able to log in to the Atlas console to create and delete databases, and equally we don’t want just anyone being able to access the database itself.
We won’t address the first set of credentials in this article, as the software doesn’t need them; this is a purely operational consideration of who has access to create and manage these ‘aaS’-style offerings. Generally, you want to ensure this is a very small set of senior folks, and it’s probably the same people who will have access to the actual credentials needed by the software services we’ll be covering in the rest of this article.
Obviously the second set of credentials are what we’re discussing here, so now we need to think about how we get them from whatever dashboard or tool was used to generate them into the pipeline, using the process presented below. It’s at this point we need a manual, operational process to securely move these items. The simplest option is to ensure that whoever creates the items is responsible for adding them to the pipeline and doesn’t have to disclose them in doing so.
    Configuring your DevOps Pipeline
Within a deployment pipeline you can set variables, which can then be accessed by tasks within the pipeline as it executes. These variables can be set per environment, so that each environment can have unique values for each of the variables, allowing for per-environment configuration whilst maintaining common variable names. For more information on pipeline variables, variable groups, and the built-in variables, please see the DevOps documentation.
For now, we create per-environment variables to hold our secrets and add each secret to its variable based on our operational process definition, as outlined above. Once set, the padlock can be clicked, which locks down the value of the variable. Once ‘locked’ the value is obscured, so it can’t be seen by anyone looking in DevOps, and it also can’t be ‘unlocked’; from here on in, only tasks running in the pipeline have access to the value. It’s now secure, and secret!
    Deploying secrets into K8s
    So now we have our secrets all sorted in DevOps, we need to get them into Kubernetes, without disclosing them along the way. Fortunately, that’s easier than it sounds!
First off, we need to create some YAML which we can use to create our secrets in our cluster. The below snippet gives an example of creating an opaque text secret, but Kubernetes’ documentation covers plenty of other use cases. The key in this case, though, is that the value of the secret isn’t held in the YAML; doing so would undo everything we’re looking to achieve here. Instead we place a token where the value would be, and that token matches the name of the variable we set up in DevOps to hold the actual value, plus a marker so that we can identify it as a token when DevOps runs our pipeline. In this example, we use a couple of hashes (##).
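The original snippet did not survive here, so the sketch below is a rough Python reconstruction of the idea: the manifest is held as a template with a ##DbPassword## token where the value would go, and the substitution function stands in for what a token-replacement task does at deploy time. The secret name, the variable name and the use of stringData are assumptions for illustration, not the article's exact manifest.

```python
# Rough reconstruction of a tokenised Secret manifest and of the substitution a
# token-replacement task performs at deploy time. Names and values are illustrative.
import os
import re

SECRET_MANIFEST_TEMPLATE = """\
apiVersion: v1
kind: Secret
metadata:
  name: my-service-secrets
type: Opaque
stringData:                        # stringData lets Kubernetes do the base64 encoding
  db-password: "##DbPassword##"    # token named after the DevOps pipeline variable
"""

def replace_tokens(template: str) -> str:
    """Replace every ##VariableName## token with the value of the matching
    environment variable (assuming the pipeline exposes its variables that way)."""
    return re.sub(r"##(\w+)##", lambda m: os.environ[m.group(1)], template)

# The pipeline would supply the secret variable; we fake it here for the example.
os.environ["DbPassword"] = "s3cr3t-value"
print(replace_tokens(SECRET_MANIFEST_TEMPLATE))
```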
Now we can tweak the deployment step(s) to replace the tokens with the values from our secret variables right before they’re deployed to the cluster. Note that you can’t do this in the build step, as that would mean storing the modified YAML as a build artefact, and again, that would have the values we don’t want disclosed in it. This needs to be done during your deployment, so the file is updated, used to configure the cluster, and then destroyed, thus providing a totally secure and opaque workflow.
There are several tokenizers available for DevOps pipelines; personally I like the ‘Replace Tokens’ task, which is part of the ‘Colin’s ALM Corner’ set of DevOps extensions. It’s easy to configure and does a great job of doing just what we need. Just set it up in the deployment pipeline for each environment, keyed to look for tokens based on the twin hashes we put in the YAML file.
    So, now you have a complete YAML within the pipeline, just use the Kubectl step to deploy it to your cluster and create the K8s secret objects ready to be consumed by your pods.
    Accessing K8s secrets in your pods
Once created in Kubernetes, consuming the secrets in your pods is as simple as binding them to environment variables or, for files, mounting them as volumes, in the usual way.
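For completeness, a process inside the pod then just reads ordinary configuration; the variable name and mount path below are illustrative assumptions.

```python
# Inside the container the secret looks like normal configuration: an environment
# variable bound from the Secret, or a file from a mounted volume (names are examples).
import os
from pathlib import Path

db_password = os.environ.get("DB_PASSWORD")   # env-var binding from the Secret
cert_path = Path("/etc/secrets/tls.crt")      # volume-mounted secret file
cert_bytes = cert_path.read_bytes() if cert_path.exists() else None
```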
    The Gotchas of secure secrets
    So, now you have a solution where your secrets are truly secret, and known only to ‘the select few’; perfect, right?
    Well, it will be, right up until something goes wrong and your devs want to poke around in the prod database for a bit to see what’s going down. Back in the day, they did that without even thinking, because everyone knew the credentials, and if they didn’t, they could always ask someone. Now they don’t know, and no one will tell them. So now you’re going to need to think how the bigger picture works. That might mean custom tooling to allow controlled access to the database or going back and looking at the basics like logging and metrics. Either way, you’ll need to resolve this, as you need to be able to support your systems, but we all know we shouldn’t be poking around in production databases anyway!
    Speaking of logging, the other gotcha is that secrets are only secrets so long as you don’t tell anyone. Keep an eye out for logs accidentally disclosing secret information, it happens way more than you might think — for example logging the connection string, including credentials, when failing to connect to a database. All your hard work and good intentions undone in a second’s oversight.
The final ‘gotcha’ should be really obvious: now that you no longer have wide knowledge of your secrets, and a hundred ways of finding out what they are (hey, they’re really secret now!), you’ll probably want to ensure you have a secure mechanism for secondary storage of them for that one day when you need them. Again, this is an entirely operational consideration, but it’s really easy to overlook as you start working in more secure ways.

    Managing Open-source security and license with WhiteSource
    Overview
WhiteSource is the leader in continuous open source software security and compliance management. WhiteSource integrates into your build process, irrespective of your programming languages, build tools, or development environments. It works automatically, continuously, and silently in the background, checking the security, licensing, and quality of your open source components against WhiteSource’s constantly updated definitive database of open source repositories.
WhiteSource provides WhiteSource Bolt, a lightweight open source security and management solution developed specifically for integration with Azure DevOps and Azure DevOps Server. It works per project and does not offer real-time alert capabilities like the full platform, which is generally recommended for larger development teams wanting to automate their open source management throughout the entire software development lifecycle (from the repositories to post-deployment stages) and across all projects and products.
    What’s covered in this lab
This lab shows how you can use WhiteSource Bolt with Azure DevOps to automatically detect alerts on vulnerable open source components, outdated libraries, and license compliance issues in your code. You will be using WebGoat, a deliberately insecure web application maintained by OWASP, designed to teach web application security lessons.
    Azure DevOps integration with WhiteSource Bolt will enable you to:

    Detect and remedy vulnerable open source components.
    Generate comprehensive open source inventory reports per project or build.
    Enforce open source license compliance, including dependencies’ licenses.
    Identify outdated open source libraries with recommendations to update.

    Before you begin
Refer to the Getting Started page before you follow the exercises.
    Use Azure DevOps Demo Generator to provision the WhiteSource project on your Azure DevOps Organization.
    Exercise 1: Activate WhiteSource Bolt
In your Azure DevOps project, under the Pipelines section, go to the WhiteSource Bolt tab, provide your work email and company name, and click the Get Started button to start using the free version.

    Upon activation, the below message is displayed.

    Exercise 2: Trigger a build
You have Java code provisioned by the Azure DevOps Demo Generator. You will use the WhiteSource Bolt extension to check for vulnerable components present in this code.
Go to the Pipelines section under the Pipelines tab, select the build definition WhiteSourceBolt and click Run pipeline to trigger a build. Click Run (leave the defaults).

To view the in-progress status of the build, click on the job named Phase 1.

    While the build is in progress, let’s explore the build definition. The tasks that are used in the build definition are listed in the table below.
Task | Usage
npm | Installs and publishes npm packages required for the build
Maven | Builds the Java code with the provided pom.xml file
WhiteSource Bolt | Scans the code in the provided working/root directory to detect security vulnerabilities and problematic open source licenses
Copy Files | Copies the resulting JAR files from the source to the destination folder using match patterns
Publish Build Artifacts | Publishes the artifacts produced by the build
Once the build is completed, click the back navigation to see the summary, which shows test results, build artifacts, etc. as shown below.

    Navigate to WhiteSource Bolt Build Report tab and wait for the report generation of the completed build to see the vulnerability report.

    Exercise 3: Analyze Reports
WhiteSource Bolt automatically detects open source components in the software, including transitive dependencies and their respective licenses.
    Security Dashboard
The security dashboard shows the vulnerability status of the build. This report lists all vulnerable open source components, with vulnerability scores, vulnerable libraries, and severity distribution.

You can see the open source license distribution and a detailed view of all components, with links to their metadata and license references.
    Outdated Libraries
WhiteSource Bolt also tracks outdated libraries in the project, providing detailed information and links to newer versions and recommendations.

    Summary
With the Azure DevOps and WhiteSource Bolt integration, you can shift your open source management left. The integration allows you to receive alerts on vulnerabilities and other issues, helping you take immediate action.

    DevOps Architect
    Location: Redwood City, California US
    Job Number: 28642
    Position Title: DevOps Engineer Architect
    External Description:
    DEVOPS ARCHITECT, INFORMATICA CLOUD OPERATIONS
    Our Company
    Everything Informatica does begins and ends with data. Simply stated, we make great data – data that is connected, clean and safe — ready to use so that all enterprises can be data ready and put their unique information potential to work. A data ready enterprise is decision-ready, customer-ready, application-ready, cloud-ready and regulation-ready. And by design, our Intelligent Data Platform delivers great data to enable our customers to be ready for anything.
Our Team
Informatica Cloud is the clear market leader of the Integration Platform-as-a-Service (iPaaS) providers. We provide Data Integration, Data Quality, Information Lifecycle Management, Test Data Management, Master Data Management and other Enterprise Information Management solutions as a service on the cloud. Thousands of customers rely on our service to move billions of records daily.
    The Cloud Operations team is responsible for the management, monitoring and operation of our Cloud service. We utilize cutting edge Cloud hosting, monitoring and deployment technologies to deliver a world class, non-disruptive Cloud user experience to our customers.
Our Ideal Candidate
You are a bright, organized, and dedicated Software-as-a-Service Operations Lead with hands-on experience in designing LAMP architecture; deep knowledge of both QA testing methodology and application management; and expert skills in Automation and Orchestration Management of the product delivery process. You are comfortable with supporting and operating high availability Cloud services. You are self-motivated with strong problem solving and troubleshooting skills.
    Your Responsibilities
    In this individual contributor role, you will report to the Sr. Director of Operations, Informatica Cloud, and be based in the company’s Redwood City, CA location. You will:

    Manage operation project across multiple teams as well as multiple regions
    Architect the SaaS Cloud deployment, collaborating with R&D teams to design the next generation of application deployment architecture, and developing the orchestration strategy and automation framework for the product delivery process
Manages and appropriately escalates delivery obstacles, risks, issues, and changes associated with the product release and deployment initiatives.
Assigns and monitors the work of technical personnel, ensuring that application deployment and operation is done in the best possible way, and reviews systems throughout the deployment and patching processes.
Evaluates technological choices (Cloud Services related, Operating and Automation tools related, System and Network related) by querying providers and providing evaluations (Proof of Value) of each solution, including ROI evaluations of present and future implications, limitations, and opportunities.
    Possesses excellent verbal and written communication skills and the ability to interact professionally with a diverse group of developers, product owners, and support subject matter experts (GCS).
    Exercises broadly delegated authority for planning, directing, coordinating, administering, and executing both routine and complex technical elements of technical operations
Manages analysis and approval of new code through security reviews, pen tests and Qualys reports. Be an advocate for security and performance standards in the organization.
Manages operational aspects of production and nonproduction application stacks, training members of the team across geographies, and validating compliance with procedures and checklists related to disk space usage, monitoring solutions, deployment, conventions, access to the production and development sources, source control access and usage, performance monitoring, and code modification validation (Git).
    Works within Cloud Trust, cross-functionally and with vendors, in order to successfully identify, prioritize, and resolve issues and provide subject matter expertise for enhancements, developments, and operational improvements to IICS.
    Identifies trending gaps or issues in day-to-day performance of Informatica Production Platform microservices and Products including by active monitoring, alert management, reporting, and process reviews
    Works closely with the Engineering, DevOps and QA in release planning, preparation, validation, post release monitoring, and ongoing monitoring.
Possesses a high-level understanding of the production operation of Java applications, rolling patch management, zero-downtime upgrade strategies, database upgrades without impacting production, and disaster recovery system design
    Provides process improvement recommendations based on best practices and industry standards.
    Be responsible for maintaining vulnerability and security standards.
    Provide evidence for SOC2 compliance to auditors.
    Operate and administer the Informatica Intelligent Cloud Services (IICS) and infrastructure using Chef/Jenkins, Spinnaker, Kubernetes deployment, SumoLogic and Panopta monitoring tools
    Operate and administer the Informatica Intelligent Cloud Services (IICS) components deployed on Amazon AWS, Microsoft Azure, and Google Cloud Platform (GCP).
    Cost Governor for all Cloud products on non-production and production environment.
    Support the agile software development process among cross-functional teams to ensure smooth product delivery
    Perform incident/alert troubleshooting, problem analysis and provide high quality solutions to technical issues
Liaise with the Informatica Security Council to manage and define ICS application security compliance standards
    Design and implement the PSR strategy at the application and database layer to improve application efficiency and effectiveness
    Define and implement the Disaster Recovery process
    Manage RCA, Incident Process, and Risk Analysis of Informatica Cloud Services
    Create scalable alerting and auto remediation systems.
Perform advanced troubleshooting and monitoring of our systems to ensure SLA and capacity requirements are met.

    Your Qualifications

    10 or more years of overall service delivery experience, of which at least 5 years are in managing Cloud operation production environments and critical business services.
    Proficiency with the Amazon AWS ecosystem
    Advanced LAMP (Linux+Apache Tomcat + MySQL+PHP) Web Application management expertise is required
Experience with networking technology, such as DYN/NetNames and load balancers, is required
Strong knowledge of Cloud automation deployment processes using orchestration management tools, such as Ansible, Chef, and Puppet, is required
Deep knowledge of setting up logging/monitoring tools, such as Nagios and Logstash, is required
    Strong Scripting language knowledge, such as Python, Shell, or Perl, is required
    Deep knowledge of performing compliance and security audits and experience in defining, configuring, and implementing disaster recovery process is a big plus
    Deep knowledge of MySQL DB schema design and performance tuning tasks is a big plus
    Ability to work under considerable pressure managing multiple tasks and priorities
    Demonstrated ability to produce high-quality results with attention to details
    Strong interpersonal and team communications skills are essential
    Ability to rapidly learn new software, frameworks, open source tools and development languages.
Configuration and maintenance of common infrastructure such as Apache and HAProxy.
    Strong knowledge of large-scale internet service architecture
    Strong understanding of Unix and TCP/IP fundamentals.
    BS degree in Computer Science or equivalent; advanced degree a plus.

    Informatica offers a competitive compensation package that includes base salary, medical, retirement and employee stock purchase (ESP) programs, flexible time off and more. Our generous benefits vary depending on your geographic work location. It’s an exciting time to work at Informatica. You can learn more about our company, our products and our services at http://www.informatica.com. We are an Equal Opportunity Employer (EOE).
    City: Redwood City
    State: California
    Alternative Location(s) :
    Community / Marketing Title: DevOps Architect
    Company Profile:
    Who We Are
    Informatica empowers the world’s most progressive companies to realize data-driven digital transformations that are changing the world. To do this, we live by our We “DATA” values. We Do Good, Act As One Team, Think Customer First, and Aspire For The Future. Together, we are conquering the impossible with data and changing what was once unimaginable into what’s now common—making lives richer, businesses stronger, and our world better.

    Unleash Your Potential
    A career with Informatica gives you all the opportunities and benefits that can only come from working for the trusted industry leader. By joining our team, you’ll be able to solve real-life problems, make a difference, have a global impact, and join a supportive group of globally diverse teammates. We encourage you to be yourself, grow with us and unleash your potential.
    EEO Employer Verbiage:
    Navigating COVID-19 and Beyond

Since March 2020, our INFA Team has been working remotely to do our part to slow the spread of COVID-19.
    During this time, work-life balance and the well-being of our team has been a priority for us. In lieu of not being in the office, our teams are actively participating online via video chats. You’ll find groups connecting for online games, virtual break rooms, online training, yoga, morning coffee, and so much more!
    We’re also offering all teammates the ability to expense home office items (monitor, chair, desk, etc…) to ensure that you’re as comfortable as possible

    All qualified applicants will receive consideration for employment without regard to race, sex, color, religion, sexual orientation, gender identity, national origin, protected veteran status, or on the basis of disability.
    Life at Informatica
    Follow us to meet our team, learn more about life, careers, and events at Informatica. Conquering the Impossible with data, come join #LifeAtINFA!

    Travel Requirement: Limited
    Location_formattedLocationLong: Redwood City, California US

    Gene Kim tells why we should all be on a DevOps journey
    DevOps advocate Gene Kim is a tireless campaigner for the software development methodology and an enthusiastic newbie skateboarder. Here’s why those two skill sets go together.

    Three years ago, Gene Kim taught himself how to skateboard so he could spend more time with his three sons. He’s proud of being a “40-something” skateboarder, and he’s even conquered a RipStik caster board as well. But “don’t overstate how good I am,” he said emphatically.
    That neatly sums up his relationship with DevOps, too. Kim, a writer and speaker, is perhaps the best known face — and voice — associated with the popular software development methodology, but he’s completely reluctant to declare any kind of ownership over the DevOps journey. “I’m uncomfortable with the credit,” he acknowledged. “It goes against everything I believe.”
    To be fair, Kim didn’t actually invent the term DevOps, which was coined by Patrick Debois (the so-called DevOps Godfather) in June 2009 at a Velocity conference. But in 2010, at DevOps Days in Mountain View, Calif., Kim heard John Willis say IT operations was lost at sea, and DevOps was the lighthouse to bring it home. Something clicked at that moment, and it started Kim on his career as a “tireless cheerleader” for the DevOps journey.
    Enjoying a moment with his children, Gene Kim believes skateboarding, much like DevOps, can at times be ‘terrifying.’
    The seeds were sown much earlier, however. Although trained as a developer, Kim was working on the operations side in the early 2000s, and things began to get ugly. “Starting in 2004,” he explained, “I really started to see organizations where there was a downward spiral on the operations side. And eventually it worked its way back to the development side too.” At one company in particular, he saw a deployment take six weeks, requiring 1,300 steps and as many as 400 people involved. The process was so arduous and difficult it required a “rehearsal” deployment, and then time to recover from the rehearsal. “You can’t see something like that and then unsee it,” he noted. And things just kept getting worse when it came to software development. As time went on, more operations departments were forced to pay penalties to customers, outsourcing was increasing and unhappiness was rampant. “‘Why are we doing this?’ I asked myself at the time. This is insanity.”
    Enter the DevOps journey and Kim’s career as a writer and speaker and, as he calls it, “relentless” evangelist. “John (Willis) making that statement about DevOps was an ‘aha! moment,'” Kim said. “It was the thing I’d been looking for, for 13 years.” Kim spends his days “hanging out with the best in the game. I shadow them and see how they think and how they work. There is nothing as fun as that.”
    Kim also has a lot of fun with the language he uses around the DevOps journey. In an acronym-filled industry, Kim reaches for actionable adjectives that perhaps no one but he would use to describe a software development platform: inevitable, inexorable, remorseless. And while he admits to being entertained by those words — particularly remorseless — he’s completely serious about it, too. “When it comes to DevOps, there is no place to hide. It doesn’t matter if you’re a developer, a QA or an [information security] person because the advantages of working this different way are so clear that it will influence how you work and whoever you are.”
    In Kim’s view, there aren’t any exceptions, either. “If you do find a place to hide [from DevOps], it might be in a pocket of irrelevance,” he said flatly. The DevOps journey, for Kim, is the opposite of friction and distraction; it’s joy and focus, he said. “That’s what DevOps is like. Life is so much better. We don’t want latency and toil.”
What they do want, Kim argued, is disruption of the right kind, where technology collides with business but in a way that is mutually respectful. The trouble is most organizations aren’t anywhere near there today. “So the problem is that in so many organizations IT is viewed as a second-class citizen,” he said. “It’s heartbreaking to hear. Tech teams do demos for the organization, and the business stakeholder doesn’t show up. The product manager doesn’t show up. It makes me so angry. If it’s not important enough to see what we built for you, if you’re too busy to attend, then you should give up your engineering cycles to someone else who cares enough, who would show up.”
    That’s the biggest problem with the ongoing DevOps journey today. It will take time to overcome, but at some point, companies will arrive at BizDevOps where the tech stakeholder and the business stakeholder work together to get things done. “We’ve got to eliminate the apathy that exists today,” he said.
    Gene Kim

    Founder and former CTO of software maker Tripwire Inc.
    Wrote the original version of Tripwire — “one of the most widely used intrusion-detection tools for Unix” — when he was an undergraduate student at Purdue University in 1992.

    Awarded the 2009 Purdue University Outstanding Alumnus; 2007 ComputerWorld “40 Innovative IT People Under the Age Of 40”; 2004 InfoWorld “Top Up and Coming CTOs to Watch”; and 2001 Portland Business Journal “Top 40 Under 40.”

    Authored The Phoenix Project: A Novel About IT, DevOps, and Helping Your Business Win and The Visible Ops Handbook.

    Unabashed DevOps evangelist and skateboarder.

In the meantime, Kim will continue to lead, or at least advocate, by example, in the hope that his enthusiasm is contagious. And he lives his message of change and growth by avidly pursuing every learning opportunity available to him. Kim, who has a graduate degree in compiler design, recently taught himself Clojure, a Java virtual machine programming language.
    “I’ve had more fun with programming lately than I’ve had in the last decade,” he said. “It’s one of the hardest I’ve learned but also the most rewarding. I’m learning more things than I’ve learned in a very long time.” And there’s still time for playing the guitar, taking out the RipStik and, of course, skateboarding, which perhaps might represent the ultimate DevOps metaphor: “It’s sometimes terrifying, but I wished I’d picked it up earlier in my life.”

Azure DevOps with Sourcetree – Adding a remote account
I’m trying to add my Azure DevOps remote account to the list of accounts I have in Sourcetree, the same way as other providers:
    However, nothing I do seems to work when I try to add a new one (with Azure DevOps selected as Hosting Service):

I tried with the email and account I use to log in to DevOps, which greets me with VS30063 Not authorized (I have full admin rights).
I’ve generated a PAT inside DevOps and used it as a password to the account (as per the instructions), which results in: Failed to check login for *user*. Insufficient authentication credentials. Sourcetree could not find password for *user* at *DevOps link*.
    I’ve also tried using the PAT as a username (and still providing the account password) with the same result as above.

    What exactly is the procedure for connecting the two, and what exactly is SourceTree expecting as an input for an Azure DevOps account?
    5 answers
    1 accepted
Apparently (at the time of writing, 23.04.2019), when entering the host URL one should not use the new https://dev.azure.com/OrgName format but should still write it in the old VSTS format, https://OrgName.visualstudio.com, even if the organization has been created on DevOps.
When using this as the host URL, and then using the account name used to log in to DevOps as the username and the PAT as the password, it works.
    This is still the case in the SourceTree 3.2.0 Windows beta that is only a few days old. When trying to authorize with https://dev.azure.com/OrgName, it seems to succeed in logging in, but fail at gaining Git permissions. Swapping to https://OrgName.VisualStudio.com works immediately. Remotes will work fine with the dev.azure.com format after this initial account authorization.
    Any remote repository added using the old Generic Account / Generic Host method (3.1.3 on Windows still had no DevOps integration) should still work if you didn’t regenerate the token those remotes use. If, like me, you regenerated your token you use for SourceTree so you could get the password, you will need to edit the DevOps remotes and select the new DevOps Host entry instead of the old Generic Host.

After trying and failing many times, this worked for me as well!
Same for me. The dev.azure.com link DOESN’T work 🙁
    This worked for me as well with the Personal Access Token.
    Ok, so I fought with this for the entire afternoon until I figured out (a) where to find my “OrgName”, and (b) specifically what to enter for the URL.
    In the Azure DevOps portal, it tells me that my repository is at a URL like this:
https://MyCompanyName@dev.azure.com/
    So, the URL that worked for me was:
https://MyCompanyName.visualstudio.com
    The part that had me messed up is that it doesn’t want anything other than this “base” URL (subdomain dot host). Don’t include any of the navigation stuff after the slash.
You will try, and think you are not successful, but you are really close to it! Follow along with me:
    I am using SourceTree for Windows version 3.2.6
    1- Tools -> Options -> Authentication
    2- Remove all Visual Studio (or DevOps). Click Ok.
3- Close SourceTree completely. I closed Visual Studio as well, just in case!
3.5- I switched to the DevOps format from the Organization settings on the DevOps website. So, if you want to follow exactly what I did, do it. Currently it is possible to return back to the old format xxx.visualstudio.com. It is your decision!
    4- Open SourceTree, go again to Tools -> Options -> Authentication.
    6- Prepare your new Personal Access Token, then click “Refresh Personal Access Token” button. Ensure you have this token saved somewhere TEMPORARILY because we will need it.
7- Enter your email as the username, and the just-generated PAT as the password.
It will tell you it failed; do not worry, it did not!
    8- Click Ok then Close SourceTree Completely.
    9- Remove the password cache file called “passwd” in “C:\Users\\AppData\Local\Atlassian\SourceTree”.
10- Open SourceTree again. You can go again to Authentication in SourceTree and see your account has actually been added!
    11- Ensure that your repository setting of your git is correctly formatted (https://dev.azure.com/YourOrgName/Project/_git/. )
12- You will notice a new password window shows up asking for a password. Enter the same token which you used earlier. Note that this password will be cached. You might get the same window when you fetch another repository. That is why we saved the token temporarily.
13- Fetch your repos, it should work now. Congratulations!
    14- Do not forget to remove the TEMPORARILY saved token (if you saved it somewhere) which can be stolen and used to access your account. I mean that copy-pasted token.

    Deploy to Azure Container Instances with Docker Desktop

    Posted on June 25, 2020
    This blog was co-authored by MacKenzie Olson, Program Manager, Azure Container Instances.
    Today we’re excited about the first release of the new Docker Desktop integration with Microsoft Azure. Last month Microsoft and Docker announced this collaboration, and today you can experience it for yourself.
    The new edge release of Docker Desktop provides an integration between Docker and Microsoft Azure that enables you to use native Docker commands to run your applications as serverless containers with Azure Container Instances.
You can use the Docker CLI to quickly and easily sign into Azure, create a Container Instances context using an Azure subscription and resource group, then run your single-container applications on Container Instances using docker run. You can also deploy multi-container applications to Container Instances that are defined in a Docker Compose file using docker compose up.
    Code-to-Cloud with serverless containers
    Azure Container Instances is a great solution for running a single Docker container or an application comprised of multiple containers defined with a Docker Compose file. With Container Instances, you can run your containers in the cloud without needing to set up any infrastructure and take advantage of features such as mounting Azure Storage and GitHub repositories as volumes. Because there is no infrastructure or platform management overhead, Container Instances caters to those who need to quickly run containers in the cloud.
    Container Instances is also a good target to run the same workloads in production. In production cases, we recommend leveraging Docker commands inside of an automated CI/CD flow. This saves time having to rewrite configuration files because the same Dockerfile and Docker Compose files can be deployed to production with tools such as GitHub Actions. Container Instances also has a pay-as-you-go pricing model, which means you will only be billed for CPU and memory consumption per second, only when the container is running.
    Let’s look at the new Docker Azure integration using an example. We have a worker container that continually pulls orders off a queue and performs necessary order processing. Here are the steps to run this in Container Instances with native Docker commands:

    Run a single container
    As you can see from the above animation, the new Docker CLI integration with Azure makes it easy to get a container running in Azure Container Instances. Using only the Docker CLI you can log in to Azure with multi-factor authentication and create a Docker context using Container Instances as the backend. Detailed information on Container Instances contexts can be found in the documentation.
    Once the new Container Instances context is created it can be used to target Container Instances with many of the standard Docker commands you likely already use; like docker run, docker ps, and docker rm. Running a simple docker run command will start a container in Container Instances using the image that is stored in a registry like Docker Hub or Azure Container Registry. You can run other common Docker commands to inspect, attach-to, and view logs from the running container.
    Use Docker Compose to deploy a multi-container app
    We see many containerized applications that consist of a few related containers. Sidecar containers often perform logging or signing services for the main container. With the new Docker Azure integration, you can use Docker Compose to describe these multi-container applications.
    You can use a Container Instances context and a Docker Compose file as part of your edit-build-debug inner loop, as well as your CI/CD flows. This enables you to use docker compose up and down commands to spin up or shut down multiple containers at once in Container Instances.
    Visual Studio Code for an even better experience
    The Visual Studio Code Docker extension provides you with an integrated experience to start, stop, and manage your containers, images, contexts, and more. Use the extension to scaffold Dockerfiles and Docker Compose files for any language. For Node.js, Python, and .NET, you get integrated, one-click debugging of your app inside the container. And then of course there is the Explorer, which has multiple panels that make the management of your Docker objects easy from right inside Visual Studio Code.
    Use the Containers panel to list, start, stop, inspect, view logs, and more.

    From the Images panel you can list, pull, tag, and push your images.
    Connect to Azure Container Registry and Docker Hub in the Registries panel to view and manage your images in the cloud. You can even deploy straight to Azure.

    The Contexts panel lets you list all your contexts and quickly switch between them. When you switch context, the other panels will refresh to show the Docker objects from the selected context. Container Instances contexts will be fully supported in the next release of the docker extension.

    Try it out
To start using the Docker Azure integration, install the Docker Desktop edge release. You can leverage the current Visual Studio Code Docker extension today; Container Instances context support will be added very soon.
    To learn more about the Docker Desktop release, you can read this blog post from Docker. You can find more information in the documentation for using Docker Container Instances contexts.

    Azure DevOps
    Plan smarter, collaborate better, and ship faster with a set of modern dev services.

    Azure Boards
    Deliver value to your users faster using proven agile tools to plan, track, and discuss work across your teams.

    Azure Pipelines
    Build, test, and deploy with CI/CD that works with any language, platform, and cloud. Connect to GitHub or any other Git provider and deploy continuously.

    Azure Repos
    Get unlimited, cloud-hosted private Git repos and collaborate to build better code with pull requests and advanced file management.

    Azure Test Plans
    Test and ship with confidence using manual and exploratory testing tools.

    Azure Artifacts
    Create, host, and share packages with your team, and add artifacts to your CI/CD pipelines with a single click.
    Extensions Marketplace
    Access extensions from Slack to SonarCloud to 1,000 other apps and services—built by the community.
    Use all the DevOps services or choose just what you need to complement your existing workflows

    Azure Boards
    Agile planning tools
    Track work with configurable Kanban boards, interactive backlogs, and powerful planning tools. Unparalleled traceability and reporting make Boards the perfect home for all your ideas—big and small.

    Azure Pipelines
    CI/CD for any platform
    Build, test, and deploy in any language, to any cloud—or on-premises. Run in parallel on Linux, macOS, and Windows, and deploy containers to individual hosts or Kubernetes.

    Azure Repos
    Unlimited free private repos
    Get flexible, powerful Git hosting with effective code reviews and unlimited free repositories for all your ideas—from a one-person project to the world’s largest repository.

    Azure Test Plans
    Manual and exploratory testing
    Test often and release with confidence. Improve your overall code quality with manual and exploratory testing tools for your apps.

    Azure Artifacts
    Universal package repository
    Share Maven, npm, NuGet, and Python packages from public and private sources with your entire team. Integrate package sharing into your CI/CD pipelines in a way that’s simple and scalable.

    Extensions Marketplace
    Access 1,000+ extensions or create your own.
    See how customers are using Azure DevOps

Chevron accelerates its move to the cloud, sharpens competitive edge with SAFe® built on Azure DevOps.

    Pioneering insurance model automatically pays travelers for delayed flights.

    Digital transformation in DevOps is a “game-changer”.

    Axonize uses Azure to build and support a flexible, easy-to-deploy IoT platform.

    Cargill builds a more fertile and secure platform for innovation in the public cloud.

    Learn how to scale DevOps practices throughout your organization
    Read Enterprise DevOps Report 2020-2021 to learn how top-performing organizations have implemented DevOps across their businesses.

    Optimize remote developer team productivity with these 7 tips
    Find out how to empower your distributed development team with the right tools, skills, and culture for remote development.
    Azure DevOps
Choose Azure DevOps for enterprise-grade reliability, including a 99.9 percent SLA and 24×7 support. Get new features every three weeks.
    On-Premises
    Manage your own secure, on-premises environment with Azure DevOps Server. Get source code management, automated builds, requirements management, reporting, and more.
    See how teams across Microsoft adopted a DevOps culture
    Get started with Azure DevOps
    Easily set up automated pipelines to build, test, and deploy your code to any platform.

    5 DevOps infrastructure challenges—and how to overcome them

    As DevOps continues to be adopted by companies, many of them are finding that a variety of roadblocks—including infrastructure—are hindering their progress.
    Nearly three-quarters of development teams spend a significant amount of time—anywhere from 10% to 50% of their workweek—updating and upgrading software, according to a study by Atlassian, a provider of DevOps tools and services. On average, companies struggle with visibility into the status of their applications, requiring an average of 3.3 tools to determine status.
    The result is that companies, though they want to move fast, are getting bogged down. They already have the technical debt of legacy technology and are finding themselves adopting new technologies that need to be understood, configured, and optimized, says Ian Buchanan, solutions engineer at Atlassian.

    “All of that technology and infrastructure are a kind of weight, a technical debt, and we should not overlook that people have to pay that down.”—Ian Buchanan

    This technical debt is a significant blocker for many companies, preventing them from creating more mature and efficient development and deployment cycles. Using a measure of software delivery and operational (SDO) performance, the DevOps Research and Assessment (DORA) group found that about 48% of DevOps-focused companies are classified as high performers, while 37% are medium performers and 15% are low performers, according to the 2018 State of DevOps Report.
    While low-performing companies initially make significant gains by automating development, medium-performing companies are slowed down by adding additional infrastructure to speed testing and deployment, preventing them from crossing the threshold into the high-performance category.
The difference between doing DevOps right and doing it wrong is stark. Elite performers—a subset, about 7%, of high performers—are able to deploy code more than 2,500 times faster from the time they commit, compared to the low performers, and have a change failure rate that is seven times lower, according to the DORA group, which is now part of Google.
    There is no easy path to the high-performing category. It takes time and effort, say experts. Here are five areas your company should focus on.
    1. Emphasize culture
    Repeatedly, DevOps experts emphasize that having the right culture is key to making DevOps work. While deploying and optimizing infrastructure can slow down development and operations, not having the right mindset among your teams can cause a project to crash, says Jayne Groll, CEO of the DevOps Institute.
    “You need a culture of innovation, a culture of disruption, and, most importantly, a culture of continuous learning,” she said. “None of the major case studies in DevOps success find that a company succeeds because they have a great pipeline—it’s because they did something that was very, very people-focused. They kept the humanity in DevOps.”
    CloudBees, a provider of continuous delivery services, agrees. A company with the right culture can overcome any and all infrastructure and skill issues, says Brian Dawson, DevOps evangelist with the company.

    “Skills and training bleed into culture. Knowing how to implement configuration and code really doesn’t matter until you are in the mindset that is required to work cross-functionally, collaboratively, and in a way where you are likely to be experimenting, failing, and then trying again.”—Brian Dawson

    2. Soft skills are as important as technical
    In its latest report, the DevOps Institute found that companies hiring for DevOps positions prioritized soft skills—such as collaboration, problem-solving ability, and interpersonal skills—as often as they prioritized technical skills.

    “Scale requires skills, and that skill requires people. There really was kind of a clear recognition that the higher-level skills were the problem-solving skills and the critical thinking skills.”—Jayne Groll

    While having employees who are willing to collaborate, learn, and jump into any problem is important, companies also emphasized automation skills, especially information-technology operations and infrastructure knowledge on the technical side, and software development, source control, and analysis on the process side.
    Yet most companies will not often find an exact match for their requirements, so finding a strong cultural and interpersonal match is often better than an exact technical match.
    3. Automation is key (of course)
    The key to DevOps is automation, and the most important skills and infrastructure are often focused on eking out more efficiency from the software development and deployment cycle. This is supported by the latest studies as well: Elite and high performers do less manual work, according to the DORA study.
    Infrastructure for automation will need constant improvements and tweaking of policy to eliminate blockers. There are three attributes of good automation, says CloudBees’ Dawson.
    First, companies need the ability to access their process quickly. Second, they need reproducibility between what a real production environment looks like versus the pre-production environment.

    “How much am I able to ensure that my environment stays the same and doesn’t drift too much, and if I’m creating separate instances of it, that they are not different from each other?”—Brian Dawson

    Finally, the process needs to be resilient to failure, he says.
    4. Manage change with faster decisions
    The process and infrastructure for making change to an application have also become a major source of inefficiency for companies, says the DevOps Institute’s Groll.

    “We talk about continuous deployment and continuous delivery, but most organizations want change announcements two weeks in advance. So there is a disconnect there between what we say we want and the reality.”—Jayne Groll

Typically, companies classify changes in one of three categories: standard changes, normal changes, and emergency fixes. Normal is the typical process for making changes, while standard changes are ones that can be made without pre-approval or, at least, expedited. With automation in DevOps, however, far more changes should be able to be made more quickly, says Atlassian’s Buchanan.

    “Standard is the new normal. With all of this automation going on, the standard changes—pre-approved, well-understood, low-risk, well-specified—can be made because we know how those processes work and we don’t have to worry that there is a lot of risk.”—Ian Buchanan

    5. Plan to scale up
    As the DORA study showed, there is a hurdle for most companies when they begin to scale up their DevOps infrastructure from the low-hanging fruit of quick automation to the more difficult-to-achieve infrastructure projects. Often, when people want to stand up DevOps in their organization, they create another functional team, says CloudBees’ Brian Dawson.

    “You are in danger of creating another silo. I don’t think the sheer number of choices in terms of infrastructure technology is the problem. I think it is how people make the choices.”—Brian Dawson

    Rather, companies need to adopt infrastructure that allows them to quickly scale up their operations. Organizations that significantly used cloud infrastructure, for example, were 23 times more likely to be high performers in the DORA study.
    How fast do companies need to move? One restaurant chain created an application for online ordering late last year, and within six months, orders through the application accounted for 15% of the company’s revenue, says Zane Lackey, chief technology officer and co-founder of Signal Sciences, an application security provider.
    In the end, DevOps infrastructure needs to serve the company, he says.

    “You can’t have a technology for only one style of how a customer deploys applications. Companies want to lift and shift some applications to the cloud, others they want on-prem, and everything in between. You need to meet them anywhere where they are deploying their apps.”—Zane Lackey

    Keep learning
    Continuous delivery and release automation (CDRA) demands speed and quality. Find out how in TechBeacon’s guide to CDRA.
    Get up to speed on quality-driven development with TechBeacon’s guide.
    3 years ago Reply

  48. We help you stay in comfort while you take care of your business
    Lowelljala

    Microsoft Teams for DevOps

    Microsoft Teams and the principles of DevOps go together like toast and jam. But have you ever considered what makes them such a good match?
    First, what do we mean by DevOps? Not to be confused with the tool “Azure DevOps,” DevOps is defined as “a set of software development practices that combines software development (Dev) and information technology operations (Ops) to shorten the systems development life cycle while delivering features, fixes, and updates frequently in close alignment with business objectives.”
    Clearly, a key component of successfully implementing a DevOps methodology is ensuring that the various disciplines can collaborate effectively. Microsoft Teams, meanwhile, is all about enabling collaboration.
    Microsoft Teams is the relatively new kid on the block in Office 365. It is the hub for team collaboration that brings together people, content and tools to ensure your team is focused, engaged and effective. In addition, external users (such as clients) can be added to a Team for open and transparent collaboration.
    Why Teams and not e-mail? Each has its place, but there are a number of reasons to choose Teams for project collaboration. For instance, when working on a project, all information is at hand without having to search back through e-mail trails. E-mail can also be distracting or not relevant to the current project. And, with Teams, all conversations are open for other members to join in or to see what their colleagues are working on.
    At New Signature, for example, we encourage our team members to have open communications regarding a project within a Microsoft Teams conversation rather than one-on-one. This way the client can see the work that’s being done, providing a transparent view that is in line with the values of our company.
    So, this takes us back to how Microsoft Teams is a good match for the principles of DevOps: it enables all-important collaboration. Here’s how it works: a “team” can be created within Microsoft Teams, including the relevant people involved in a specific DevOps project. Within a Team there is a concept of “channels,” where each channel is created for a specific topic within that Team. A good methodology to follow for DevOps projects is to create a channel for each of your stages, for example Analyse, Design, Develop, Test, and Operate, in addition to the “General” channel that is created by default.

    This means that conversations around analysing the project’s requirements can be done in the Analyse channel, designing or architecting the solution can be done in the Design channel, development discussions in the Develop channel, discussions around quality and testing in the Test channel, and discussions around releases and operations go in the Operate channel.
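    This channel-per-stage setup can also be scripted rather than clicked together in the Teams client. The following Python sketch is an illustration only, not part of the original article: it assumes you already have a Microsoft Graph access token with channel-creation permission and the ID of the Team created for the project, and it calls the documented Graph channels endpoint once per stage.

    import requests

    # Assumptions for illustration: a valid Microsoft Graph access token (e.g. acquired
    # via MSAL) with permission to create channels, and the GUID of the project's Team.
    ACCESS_TOKEN = "<graph-access-token>"
    TEAM_ID = "<team-guid>"

    # One channel per DevOps stage, alongside the default "General" channel.
    STAGES = ["Analyse", "Design", "Develop", "Test", "Operate"]

    def create_channel(team_id: str, name: str) -> None:
        """Create a standard channel in the given Team via Microsoft Graph."""
        response = requests.post(
            f"https://graph.microsoft.com/v1.0/teams/{team_id}/channels",
            headers={
                "Authorization": f"Bearer {ACCESS_TOKEN}",
                "Content-Type": "application/json",
            },
            json={
                "displayName": name,
                "description": f"Discussions for the {name} stage of this project",
                "membershipType": "standard",
            },
            timeout=30,
        )
        response.raise_for_status()

    for stage in STAGES:
        create_channel(TEAM_ID, stage)

    The same endpoint is what the Teams client uses behind the scenes, so scripting it simply makes the stage structure repeatable across projects.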
    Within each channel you can also add tabs that integrate with other tools. For example, by default, you will get a Conversation and a Files tab for each channel. The Files tab links to a folder in a document library that is part of a SharePoint site that is created for the Team when the Team itself is created.
    You can also add additional tabs to each channel that integrate with other tool sets. You could create a Wiki for each channel, for instance.
    Some typical tabs for the General channel could be:

    Conversations: Default tab created for conversations relating to the general project.
    Files: Default tab for storing documentation relating to the general project.
    Sprint : A link to the sprint backlog in Microsoft DevOps (opens embedded within Teams)
    Backlog : A link to the full backlog in Microsoft DevOps (opens embedded within Teams)
    Features : A link to the Feature board in Microsoft DevOps (opens embedded within Teams)
    Project dashboard : A link to a dynamic Microsoft DevOps dashboard that is used with the client to confirm the project delivery progress

    Some typical tabs for the Prep & Design channel could be:

    Conversations: Default tab created for conversations relating to project prep & design.
    Files: Default tab for storing documentation relating to the project prep & design.
    UX/UI: A link to the location where the UX/UI designs are stored (for example, in InVision, Framer, or Figma, to allow feedback on designs and wireframes)

    Some typical tabs in the Engineer channel could be:

    Conversations: Default tab created for conversations relating to project development
    Files: Default tab for storing documentation relating to the project development
    Wiki: A wiki for relevant developer information
    Engineering dashboard: A link to a dynamic Microsoft DevOps dashboard that includes items such as the current bug cap, the current sprint, build and release pipeline details, and speed-of-delivery KPIs.

    Some typical tabs in the Test channel could be:

    Conversations: Default tab created for conversations relating to project testing
    Files: Default tab for storing documentation relating to the project testing
    Quality dashboard: A link to a dynamic Microsoft DevOps dashboard that is used to report on the current “bug cap” (a figure that ensures the number of open bugs never exceeds a defined amount, or else new feature development is halted), orphaned requirements, unit tests, test plans, and the like

    Some typical tabs in the Operate channel could be:

    Conversations: Default tab created for conversations relating to project operations
    Files: Default tab for storing documentation relating to the project operations
    Customer service: A link to your customer service application (for example, embedding a view of the tickets in Zendesk or ServiceNow that relate to this project)
    Release dashboard: A link to a dynamic report in Microsoft DevOps that shows the details of the operational environment (most recent release details, performance, costs, etc.)

    There are also several integrations that can be done between Microsoft DevOps and Microsoft Teams, as follows:

    You can embed Microsoft DevOps dashboards into a Microsoft Team channel as a tab
    You can embed Microsoft DevOps Kanban boards into a Microsoft Team channel as a tab
    You can create a connector (using service hooks) between Microsoft DevOps and Teams to send notifications on various events in your projects (a scripted sketch of one such subscription follows this list), such as:
    Configure comments in DevOps with “#Teams,” which can also be posted to the Develop channel
    Configure a notification to the Operate channel when a release is completed
    Configure a notification to the Develop channel when a pull request is created
    Configure a notification to the Develop channel when a build is completed
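    On the Teams side these notifications are typically wired up through the Azure DevOps apps for Teams or an incoming webhook; on the DevOps side, each of the items above is just a service hook subscription. As a rough, hedged sketch only (the organisation name, project ID, webhook URL, and API version below are placeholders, not values from the article), a build-completed subscription could be created through the Service Hooks REST API like this:

    import requests

    # Illustrative placeholders only.
    ORG = "your-organisation"
    PROJECT_ID = "<project-guid>"
    WEBHOOK_URL = "https://example.com/hooks/develop-channel"  # endpoint that relays into the Develop channel
    PAT = "<personal-access-token>"

    # Service hook subscription: when a build completes, POST the event payload to the webhook URL.
    subscription = {
        "publisherId": "tfs",
        "eventType": "build.complete",
        "resourceVersion": "1.0",
        "consumerId": "webHooks",
        "consumerActionId": "httpRequest",
        "publisherInputs": {"projectId": PROJECT_ID},
        "consumerInputs": {"url": WEBHOOK_URL},
    }

    response = requests.post(
        f"https://dev.azure.com/{ORG}/_apis/hooks/subscriptions?api-version=6.0",
        auth=("", PAT),  # basic auth: empty user name plus a personal access token
        json=subscription,
        timeout=30,
    )
    response.raise_for_status()
    print(response.json()["id"])  # ID of the newly created subscription

    Pull-request and release notifications follow the same pattern with a different eventType.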

    Further integration and automation in DevOps can also be achieved with Microsoft Teams combined with PowerApps, Microsoft Forms, Flow and the Microsoft Bot Framework, to make a digital assistant part of your team that answers questions, provides notifications, and more. For more information, see Add bots to Microsoft Teams apps.
    Microsoft Teams also allows you to integrate PowerApps and Forms for simple collection of information from team members. One example would be a form to collect feature suggestions that prompts the user for the relevant information to create a Feature in Microsoft DevOps, without users having to go into a more developer-orientated tool such as DevOps.
    In addition to integrations, teams can hold meetings using Microsoft Teams. These meetings can be joined by internal and external individuals, even if they don’t have a Teams account. In addition, all information from the meeting is retained within the Team for future recollection. Meetings within Teams have some great features such as video conferencing, background blurring, screen sharing, whiteboarding, meeting recording, and transcription. For more information see 9 tips for meeting with Microsoft Teams.
    To find out more about how your organisation can leverage its investment in Office 365 and use Microsoft Teams as part of its tool set to deliver DevOps principles, get in touch.
    And feel free to share any insights about how you are using Microsoft Teams and Microsoft DevOps together.
    Ben Weeks is a solutions lead, applied innovation and DevOps consultant at New Signature helping Enterprises move to the cloud and adopt agile ways of working.

    3 years ago Reply

  49. We help you stay in comfort while you take care of your business
    Elginjala

    Scott Hanselman
    Setting up Azure DevOps CI/CD for a .NET Core 3.1 Web App hosted in Azure App Service for Linux
    Following up on my post last week on moving from App Service on Windows to App Service on Linux, I wanted to make sure I had a clean CI/CD (Continuous Integration/Continuous Deployment) pipeline for all my sites. I’m using Azure DevOps because it’s basically free. You get 1800 build minutes a month FREE and I’m not even close to using it with three occasionally-updated sites building on it.

    Last Post: I updated one of my websites from ASP.NET Core 2.2 to the latest LTS (Long Term Support) version of ASP.NET Core 3.1 this week. I want to do the same with my podcast site AND move it to Linux at the same time. Azure App Service for Linux has some very good pricing and allowed me to move over to a Premium v2 plan from Standard which gives me double the memory at 35% off.

    Setting up on Azure DevOps is easy and just like signing up for Azure you’ll use your Microsoft ID. Mine is my gmail/gsuite, in fact. You can also login with GitHub creds. It’s also nice if your project makes NuGet packages as there’s an integrated NuGet Server that others can consume libraries from downstream before (if) you publish them publicly.

    I set up one of my sites with Azure DevOps a while back in about an hour using their visual drag and drop Pipeline system which looked like this:

    There’s some controversy as some folks REALLY like the “classic” pipeline while others like the YAML (Yet Another Markup Language, IMHO) style. YAML doesn’t have all the features of the original pipeline yet, but it’s close. Its primary advantage is that the pipeline definition exists as a single .YAML file and can be checked in with your source code. That way someone (you, whomever) could import your GitHub or DevOps Git repository and it includes everything it needs to build and optionally deploy the app.
    The Azure DevOps team is one of the most organized and transparent teams with a published roadmap that’s super detailed and they announce their sprint numbers in the app itself as it’s updated which is pretty cool.
    When YAML includes a nice visual interface on top of it, it’ll be time for everyone to jump but regardless I wanted to make my sites more self-contained. I may try using GitHub Actions at some point and comparing them as well.
    Migrating from Classic Pipelines to YAML Pipelines
    If you have one, you can go to an existing pipeline in DevOps and click View YAML and get some YAML that will get you most of the way there but often includes some missing context or variables. The resulting YAML in my opinion isn’t going to be as clean as what you can do from scratch, but it’s worth looking at.
    I decided to disable/pause my original pipeline and make a new one in parallel. Then I opened them side by side and recreated it. This let me learn more, and the result ended up cleaner than I’d expected.

    The YAML editor has a half-assed (sorry) visual designer on the right that basically has Tasks that will write a little chunk of YAML for you, but:

    Once it’s placed, you’re on your own
    You can’t edit it or modify it visually. It’s text now.
    If your cursor has the insert point in the wrong place, it’ll mess up your YAML
    It’s not smart

    But it does provide a catalog of options and it does jumpstart things. Here’s my YAML to build and publish a zip file (artifact) of my podcast site; a reconstructed sketch of the pipeline follows the list below. Note that my podcast site is three projects: the site, a utility library, and some tests. I found these docs useful for building ASP.NET Core apps.

    You’ll see it triggers builds on the main branch. “Main” is the name of my primary GitHub branch. Yours likely differs.
    It uses Ubuntu to do the build, and it builds in Release mode.
    I install the .NET 3.1.x SDK for building my app, and I build it, then run the tests based on a globbing *tests pattern.
    I do a self-contained publish using -r linux-x64 because I know my target App Service is Linux (it’s cheaper) and it goes to the ArtifactStagingDirectory and I name it “hanselminutes.” At this point it’s a zip file in a folder in the sky.
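    The YAML file itself did not survive this copy, so here is a minimal sketch reconstructed from the description above. The task versions, glob patterns, and project layout are assumptions, not Hanselman’s exact file:

    # Sketch only: reconstructed azure-pipelines.yml based on the steps described above.
    trigger:
    - main                      # build on every push to the main branch

    pool:
      vmImage: 'ubuntu-latest'  # build on Ubuntu

    variables:
      buildConfiguration: 'Release'

    steps:
    - task: UseDotNet@2         # install the .NET Core 3.1 SDK
      inputs:
        packageType: 'sdk'
        version: '3.1.x'

    - task: DotNetCoreCLI@2     # build all projects in Release mode
      inputs:
        command: 'build'
        projects: '**/*.csproj'
        arguments: '--configuration $(buildConfiguration)'

    - task: DotNetCoreCLI@2     # run the test project(s), matched with a *tests glob
      inputs:
        command: 'test'
        projects: '**/*tests*.csproj'
        arguments: '--configuration $(buildConfiguration)'

    - task: DotNetCoreCLI@2     # self-contained publish targeting Linux
      inputs:
        command: 'publish'
        publishWebProjects: true
        arguments: '-r linux-x64 --self-contained true --configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'
        zipAfterPublish: true

    - task: PublishBuildArtifacts@1   # drop the zip as an artifact named "hanselminutes"
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'hanselminutes'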

    Next I move to the release pipeline. Now, you can also do the actual Azure Publish to a Web App/App Service from a YAML Build Pipeline. I suppose that’s fine if your site/project is simple. I wanted to have dev/test/staging so I have a separate Release Pipeline.
    The Release Pipelines system in Azure DevOps can pull an “Artifact” from anywhere – GitHub, DevOps itself natch, Jenkins, Docker Hub, whatever. I set mine up with a Continuous Deployment Trigger that makes a new release every time a build is available. I could also do Releases manually, with specific tags, scheduled, or gated if I’d liked.

    Mine is super easy since it’s just a website. It’s got a single task in the Release Pipeline that does an Azure App Service Deploy. I can also deploy to a slot like Staging, then check it out, and then swap to Production later.
    There’s nice integration between Azure DevOps and the Azure Portal so I can see within Azure in the Deployment Center of my App Service that my deployments are working:

    I’ve found this all to be a good use of my staycation, and even though I’m just a one-person company I’ve been able to get a very nice automated build system set up at very low cost (a GitHub free account for a private repo, 1,800 free Azure DevOps minutes, and an App Service for Linux plan). A Basic plan starts at $13 with 1.75 GB of RAM, but I’m planning on moving all my sites over to a single big P1v2 with 3.5 GB of RAM and an SSD for around $80 a month. That should get all of my 20 sites under one roof for a price/perf I can handle.
    Sponsor: Like C#? We do too! That’s why we’ve developed a fast, smart, cross-platform .NET IDE which gives you even more coding power. Clever code analysis, rich code completion, instant search and navigation, an advanced debugger. With JetBrains Rider, everything you need is at your fingertips. Code C# at the speed of thought on Linux, Mac, or Windows. Try JetBrains Rider today!
    About Scott
    Scott Hanselman is a former professor, former Chief Architect in finance, now speaker, consultant, father, diabetic, and Microsoft employee. He is a failed stand-up comic, a cornrower, and a book author.

    3 years ago Reply

  50. We help you stay in comfort while you take care of your business
    Shreveportjala

    10 Best Udemy Online Courses for Java Developers
    Best Udemy courses to learn Java, Spring Framework, Clean Code, Refactoring, Multithreading, Concurrency, Maven, Docker, Git, and Jenkins

    Hello guys, if you are a Java developer, or want to become an expert Java developer, and are looking for the best Udemy courses, then you have come to the right place. Earlier, I shared the best Udemy courses for software developers, and today I am going to share the best Udemy courses for Java developers.
    This list includes courses to learn essential Java skills like core Java, Spring Framework, Clean Code, Refactoring, Multithreading, Concurrency, Maven, Docker, Git, and Jenkins. These are must-have skills for Java professionals.
    I have shared the ultimate best Udemy courses on each topic so you can pick one and learn that topic better. If you think there is another great Udemy course that should be on this list for Java developers, feel free to suggest it.
    Top 10 Udemy Courses for Java Programmers in 2020
    Here is my list of some of the best courses for Java and Web developers from Udemy. This includes courses on Java, Spring 5, Spring Boot 2, Git, Maven, Jenkins, Docker, REST API, Microservice, and general web development stuff. This should be a good list of skills to learn and update in 2020.
    1. The Complete Java Masterclass
    First things first: if you are a professional Java developer, you have got to learn recent Java versions like Java 9, 10, and 11.
    Even if you don’t use Jigsaw, there are some API enhancements worth looking at; for example, creating a map or list in JDK 11 is much more comfortable with the new factory methods introduced in JDK 9.
    This course was recently updated to cover Java 11 and provides a good overview of all new features of JDK 9, JDK 10, and JDK 11.
    Here is the link to join this course — The Complete Java Masterclass

    2. Spring Framework 5: Beginner to Guru
    This is one of the most modern and comprehensive Spring Framework courses available on Spring framework 5 and Spring Boot 2 at this moment.
    This course is taught by John Thompson, an authority in the Spring world who previously worked with Pivotal customers as a SpringSource consultant and has also spoken at SpringOne.
    If you are serious about learning the new features of Spring 5, like Reactive Programming, then this is the right course for you.

    3. Docker for Java Developers
    As the title suggests, this course is specially designed for Java developers. You will learn how you can use Docker to supercharge your enterprise Java Development.
    You can take this course once you have a basic understanding of what Docker is and what value it provides in terms of software development and deployment.
    Here is the link to join this course — Docker for Java Developers

    4. Master Microservice with Spring Boot and Spring Cloud
    Developing RESTful web services and microservices is fun, and the combination of Spring Boot, Spring Web MVC, Spring Web Services, and JPA makes it even more fun. In this course, you will learn how to implement microservices using Spring Cloud.
    This course is divided into two parts: RESTful web services and Microservices. In the first part of the course, you will learn the basics of RESTful web services, and the second part is focused on Microservices.

    5. Jenkins, From Zero To Hero: Become a DevOps Jenkins Master
    If you want to take your DevOps skills to the next level in 2020, this is the right course for you. In this course, you will learn how to build an automated continuous integration pipeline with Jenkins.
    You will also understand the fundamental concepts of continuous inspection, continuous integration, and continuous deployment, and the difference between them.

    6. Git Complete: The definitive, step-by-step guide to Git
    Git is the most popular version control system at the moment. It’s different from SVN or CVS in the sense that it’s a distributed version control system, which means you can commit changes to your local repository and then push them to GitHub or another remote repository all at once.
    This course provides a complete overview of Git, including installation, branching and merging, downloading projects from GitHub, rebasing, stashing, working with Git Bash on the command line, and tagging essential milestones.

    7. Maven Crash Course
    This is for Java developers who don’t know Maven, one of the essential skills for Java Programmers. Ideally, you should have learned it earlier, but if you haven’t, it’s not too late. You can learn Maven using this Crash course in a couple of days.
    Since Maven is the most popular build tool for Java projects and also makes dependency management more comfortable, I strongly recommend that every Java developer learn Maven in 2020 if they haven’t already.
    Here is the link to join this course — Maven Crash Course

    8. Docker Mastery: The Complete Toolset From a Docker Captain
    Docker is another essential skill for Java developers to learn in 2020. Docker provides a new way of developing and deploying your mobile and web apps. In the current world of distributed development and deployment, it has increasingly become an essential skill.
    This course not only introduces the big picture of Docker but also provides a complete overview of different Docker tools.

    9. Reactive Programming with Spring Framework 5
    Another excellent and advanced course by John Thompson, with a particular focus on reactive programming with Spring 5.
    This course provides hands-on experience with building a Reactive application to stream ‘movie events’ leveraging the Reactive data types and WebFlux — both are new features of Spring Framework 5.

    10. Java Multithreading, Concurrency, & Performance Optimization
    Multithreading and concurrency are among the most sought-after skills for Java developers. There is a high demand for Java developers who understand multithreading and concurrency well, but at the same time, this is one of the most difficult topics to master.
    If you want to take your Concurrency skills to the next level and want to become an expert in Multithreading, Concurrency, and Parallel programming in Java, with a strong emphasis on high performance, then I highly recommend this course to you.
    There is a reason this one stands out: it is probably the most important course on this list, and I highly recommend every Java developer go through it.

    11. Pyramid of Refactoring (Java) — Clean Code Gradually
    This is another advanced-level course for Java developers to improve their coding skills. You will learn things like how to write clean code gradually and how to notice emerging design patterns like Interpreter, Fluent Builder, and Factory Method.
    This is the first module of a planned series called “Pyramid of Refactoring” dedicated to achieving clean code. There are a couple more courses in this series.
    The instructor uses refactoring techniques and performs all the changes live, which helps you to understand how refactoring can improve code quality. If you are serious about code quality, then this course will help you a lot.

    That’s all about some of the top Udemy courses for Java and web developers. Udemy regularly runs crazy sales where every course is available for just $10. It’s an excellent opportunity to buy the classes you like, and then you can take them at your convenience. Udemy gives you lifetime access to the courses you buy, which means you can take them anytime.
    Other Programming Articles you may like
    Thanks for reading this article so far. If you like these Java and web development courses from Udemy, then please share them with your friends. If you have any questions or feedback, then please drop a note.

    3 years ago Reply

