I’ve always developed and run scripts locally through VS Code. I’m just getting started with Azure Automation and am not a fan of waiting for a job to complete before seeing my results. In fact, it’s very frustrating. I’d rather develop and test my script locally first before running it in Azure Automation.

I’m using a user-managed identity to run scripts against Exchange Online. VS Code has an Azure Automation plugin that provides an option to run the script locally, but the script bombs out when attempting to use the user-managed identity, since a managed identity can only be used from within Azure.

For those of you who use Azure Automation, I can’t imagine that you develop significant portions of the script and wait for automation jobs to complete each time to verify changes.

How do you develop locally? Do you use an app registration w/ client secret in key vault and call that from your local machine? Do you have a process for developing locally for scripts that specify managed identities?

Thanks everyone!

  • flambonkscious · 1 year ago

    Still no responses? That’s kinda scary…

    I know this place is very FOSS-heavy, but as another guy that’s basically balls deep with Microsoft (I may have that around the wrong way) I’m surprised.

    Then again debugging is …complex - I can’t understand why you’d want an automation layer around it

    • Greg Tate@programming.devOP · 1 year ago

      My challenge is that I’m used to developing scripts locally with an account that has privileged access. The development time w/ debugging is fast, as the account has immediate access. With Azure Automation, I have to wait for a cloud job to complete, and sometimes this takes a minute or two. That’s too long for me to execute and wait for the results.

      I would rather develop locally using a privileged account and then push to Azure Automation when I’m confident that my script logic is executing as expected.

      I think I found a way around the issue. In my script logic I can test for the PowerShell profile path. In Azure Automation, the profile path references ‘ContainerUser’. When running locally my profile path references my local directory. If the profile path references ContainerUser then I can specify to use the user-managed identity; otherwise, I’ll use my interactive credentials, e.g. a PowerShell session that I have previously established locally with Exchange Online.
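
      Something like this (a sketch with placeholder variable names; the -ManagedIdentity and -ManagedIdentityAccountId parameters are from ExchangeOnlineManagement v3):

      ```powershell
      # $PROFILE resolves under a 'ContainerUser' home inside an Azure Automation sandbox
      if ($PROFILE -like '*ContainerUser*') {
          # Placeholders: the identity's client ID and the tenant's .onmicrosoft.com domain
          Connect-ExchangeOnline -ManagedIdentity -ManagedIdentityAccountId $UamiClientId -Organization $Org
      }
      else {
          # Local dev: interactive sign-in (or reuse a previously established session)
          Connect-ExchangeOnline -UserPrincipalName $Upn
      }
      ```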

      • nyar@lemmy.world · 1 year ago

        Doesn’t sound like a way around the issue, sounds like that was the issue.

  • pwshguy (mdowst)@programming.devM · 1 year ago

    Typically, when I have a script I need to test locally, I’ll comment out the identity connection command and just authenticate outside of my script. If I’m feeling real fancy, I’ll write a try/catch to attempt to authenticate first as the managed identity then if it fails prompt me for credentials. Not the most elegant solution, but it works.

    try {
        # In Azure Automation: authenticate as the managed identity
        Add-AzAccount -Identity -SubscriptionId $SubscriptionId -ErrorAction Stop | Out-Null
    }
    catch {
        # Locally the managed identity is unavailable, so fall back to an interactive prompt
        Add-AzAccount -SubscriptionId $SubscriptionId
    }
    
    • Greg Tate@programming.devOP · 1 year ago

      Makes sense. I found an environment variable that detects whether the process is running in Azure Automation, i.e. it’s running in Azure Automation if the variable is defined:

      Get-ChildItem -Path env:AZUREPS_HOST_ENVIRONMENT
      

      This helped me provide some conditional control on when to use the managed identity and when to use my interactive credentials.
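
      In practice the conditional looks roughly like this (a sketch; $UamiClientId is a placeholder for the user-managed identity’s client ID):

      ```powershell
      # AZUREPS_HOST_ENVIRONMENT is only defined inside Azure Automation (at time of writing)
      if (Test-Path env:AZUREPS_HOST_ENVIRONMENT) {
          # Running in Azure Automation: sign in with the user-managed identity
          Connect-AzAccount -Identity -AccountId $UamiClientId | Out-Null
      }
      else {
          # Running locally: interactive sign-in
          Connect-AzAccount | Out-Null
      }
      ```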

      Meanwhile I’m figuring out that the Azure Automation plugin for VS Code is really only useful for publishing runbook code; the extension doesn’t provide an easy way to manage custom modules. And with the code I’m writing, I’m quickly finding that it won’t be efficient to keep everything in runbook files. So I’m now heading down the path of using a pipeline to publish my custom module to Azure Automation, then calling that module from a lightweight runbook.
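
      The publish step I’m aiming for is roughly this (a sketch with placeholder storage, resource group, and module names; New-AzAutomationModule needs the module zip at a URI the service can reach, e.g. a blob with a SAS token):

      ```powershell
      # Upload the packaged module to blob storage (all names are placeholders)
      $ctx = New-AzStorageContext -StorageAccountName 'mystorageacct' -UseConnectedAccount
      Set-AzStorageBlobContent -Container 'modules' -File .\MyExoModule.zip -Blob 'MyExoModule.zip' -Context $ctx -Force

      # Generate a short-lived, read-only SAS URI for the blob
      $uri = New-AzStorageBlobSASToken -Container 'modules' -Blob 'MyExoModule.zip' -Permission r -ExpiryTime (Get-Date).AddHours(1) -Context $ctx -FullUri

      # Import (or update) the module in the Automation account
      New-AzAutomationModule -ResourceGroupName 'rg-automation' -AutomationAccountName 'aa-prod' -Name 'MyExoModule' -ContentLinkUri $uri
      ```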

      Appreciate the guidance!

      • pwshguy (mdowst)@programming.devM · 1 year ago

        Just a heads up, I received confirmation from the product team that the AZUREPS_HOST_ENVIRONMENT environment variable is going away. They are moving the backend to containers. Also, the COMPUTERNAME variable that was always “client” is changing too: it will become “Sandbox-###”, with # being random digits. I started using the code block below in my runbooks to determine whether they are running in Azure or on a hybrid worker/locally. It accounts for both the current behavior and the updates rolling out in the near future.

        $isHybridWorker = $true
        if ($env:COMPUTERNAME -eq 'CLIENT') {
            # Current Azure Automation sandbox reports COMPUTERNAME as "CLIENT"
            $isHybridWorker = $false
        }
        elseif ($env:USERNAME -eq 'ContainerAdministrator') {
            # Upcoming container-based sandbox runs as "ContainerAdministrator"
            $isHybridWorker = $false
        }