One person very close to me (hi Vova!) told me on Friday (16th February 2024) that their MacBook had been stolen in Hamburg the day before. Luckily, it had Find My and Offline Find (more about both later) enabled, and they managed to track it all the way to Berlin, into a nine-story hotel/hostel. They contacted the police, who managed to get into the hostel on Friday evening. They even obtained the details of the person who probably took the MacBook from Hamburg and brought it to Berlin (I will avoid discussing any personal information since that could just cause problems - the story is in Czech, use a translator if you want to read it), and they likely “searched” the room he stayed in, but didn’t find anything.
My friend and I headed to Berlin on Friday evening to search for it. Unfortunately, we didn’t find anything. But the MacBook was still broadcasting its location via Offline Find, visible in Find My.
Personally I believe that Apple’s Find My is one of the greatest inventions. You can easily track both online and offline devices, and in some cases (like AirTags or AirPods) even locate them via Find Nearby. Unfortunately, this doesn’t work for MacBooks.
The greatest feature of Find My is that your device can be located even when it is offline, using Offline Find (OF). It works like this: when your device loses internet connection, it starts broadcasting a temporary public key (more on that later), which is picked up by other devices with OF enabled, and they send their location along with the device information to Apple’s servers. This effectively turns Apple devices into an amazing tracking network. And it’s also anonymous and secure.
You can learn more about how OF works from Apple docs, but to explain it shortly (it works differently for devices than for AirTags or trackable accessories, and I will focus only on devices - MacBooks - here): a device generates its private and public key on the P-224 curve plus a shared secret, stores them as an encrypted file in iCloud (more on that later as well), and from this initial key derives a new key pair every 15 minutes, which it broadcasts over Bluetooth Low Energy (BLE). This makes devices impossible to track over longer periods of time, unless you know the private key and the shared secret.
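To make the rotation a bit more concrete, here is a rough Python sketch of the derivation, based on my reading of the research paper mentioned further below - the labels, output lengths and modular reduction are taken from the paper and may not match Apple’s implementation exactly:

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.x963kdf import X963KDF

def kdf(key: bytes, label: bytes, length: int) -> bytes:
    # ANSI X9.63 KDF with SHA-256; the label separates the two derivation purposes
    return X963KDF(algorithm=hashes.SHA256(), length=length, sharedinfo=label).derive(key)

def next_period(sk_prev: bytes, d_master: int, curve_order: int):
    # sk_prev:     previous period's 32-byte shared secret (SK_{i-1})
    # d_master:    master private key scalar on the P-224 curve
    # curve_order: group order of P-224 (a fixed constant of the curve)
    sk_i = kdf(sk_prev, b"update", 32)              # new shared secret for this 15-minute window
    uv = kdf(sk_i, b"diversify", 72)                # 36 bytes for u_i, 36 bytes for v_i
    u_i = int.from_bytes(uv[:36], "big") % curve_order
    v_i = int.from_bytes(uv[36:], "big") % curve_order
    d_i = (d_master * u_i + v_i) % curve_order      # rotated private key; p_i = d_i * G is what gets broadcast
    return sk_i, d_i

The owner’s devices know the master key and the initial shared secret, so they can recompute the key for any period; an observer only ever sees seemingly random, unlinkable public keys.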
The derived public key is then broadcast every 2 seconds or so, so other devices can pick it up, encrypt their location information with the public key, and upload it to iCloud, where your other devices can then decrypt it (this works only in Find My on your devices, not on the web as far as I know). Not even Apple can read this information.
So far, everything about this sounds great: anonymously tracking devices so you can see where they are, privacy preserved (there are some discussions about improving this, but whatever). The stupid thing is that Apple gives you only limited information - like the last location. But they have the whole location history, so you could pull up a map of where the device has been and reconstruct the robber’s moves - yet this data is kept from you.
It is also very stupid that, since the device is broadcasting over BLE, there is no tool to easily see that you are close to your device, or getting closer, etc. That would have helped us with the search in the hostel.
Going to the hostel completely unprepared, I quickly read up on how Find My works and downloaded some random BLE tracker app to my iPhone. I was hoping to see something useful there, but there were just some iPhones around, Google Nest cameras (which the reception told us don’t work anyway), some Androids, Bluetooth speakers and a lot of devices tagged as Unknown.
After going through the building on both Friday and Saturday, we returned to Prague. On the way back, I realized that there has to be a way to track the device, and started looking into how Find My works much more closely. I really regret not doing it beforehand, because we could’ve come in ready.
Thankfully there’s a lot of open source out there. I started by getting myself a MacBook from a colleague (thanks Martin!) and going through the OpenHaystack project, which attempts to use Apple’s Find My network for custom-made AirTags. This project pointed me to a very important research paper, Who Can Find My Devices? Security and Privacy of Apple’s Crowd-Sourced Bluetooth Location Tracking System, which pretty much explains everything about this technology.
Unfortunately, OpenHaystack is quite old and doesn’t work on the latest macOS, because it relies on some hacks via a plugin in the Mail app (shame on you Apple, for not giving us a proper API for this at least!), and it pulls data from the Find My servers, which is quite useless in a case where we need to see how far we are from the device.
Luckily, there is the FindMy.py project, which does a lot of things for you (I am skipping the part where I tried to figure a lot of this out on my own). Basically, it uses the bleak library to interact with the Bluetooth stack on the device, perform the scan and decode the returned packets.
Apple encodes part of the public key into the broadcast MAC address, so it changes every 15 minutes, as mentioned above. See the paper to learn more about the packet structure.
One issue is that when this code is run on a MacBook, Apple will return a UUID for the BT device instead of a MAC address, so you have to specifically ask for the MAC address back - by passing cb=dict(use_bdaddr=True) into BleakScanner.
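To give an idea of what the scanning looks like, here is a minimal bleak sketch. It assumes the payload layout described in the paper (Apple’s manufacturer data with company ID 0x004C, Offline Finding advertisements starting with type byte 0x12); FindMy.py does the real parsing:

import asyncio
from bleak import BleakScanner

APPLE_COMPANY_ID = 0x004C  # Apple's Bluetooth company identifier
OF_ADV_TYPE = 0x12         # Offline Finding advertisement type, per the paper

def on_advertisement(device, adv):
    payload = adv.manufacturer_data.get(APPLE_COMPANY_ID)
    if payload and payload[0] == OF_ADV_TYPE:
        # device.address is the rotating MAC derived from the current public key
        print(f"OF beacon from {device.address}, RSSI {adv.rssi}")

async def main():
    # cb=dict(use_bdaddr=True) tells the macOS backend to return MAC addresses instead of UUIDs
    scanner = BleakScanner(on_advertisement, cb=dict(use_bdaddr=True))
    await scanner.start()
    await asyncio.sleep(30)
    await scanner.stop()

asyncio.run(main())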
After getting myself a second MacBook for testing (thanks Karel!), I managed to discover it, verify that the keys change every 15 minutes, and confirm that I am able to track the device’s proximity based on signal strength (more on that later).
Now to the next challenge - finding the needle in the haystack. There are many MacBooks broadcasting beacons like this - take a trip on the metro and try the discovery, or try it at a dentist’s office or at your work. Without knowing what we’re looking for, this would be just a brute-force search set up for failure.
As mentioned above, the initial keypair, the shared secret and also the pairing time are stored in iCloud, protected by a password stored in the keychain. Luckily, someone already managed to find a way to get the keychain password and decrypt the files, yay (there is another version which wasn’t working for me)!
Once we manage to get the keys required to generate the device’s current public key, we can move on. FindMy.py also includes code to generate the public keys for a specific period of time, so we are going to use it. It is made for AirTags, which use two shared secrets, but with a MacBook (and phones) we only have one. So, modifying the code slightly, we obtain the list of all possible public keys for the specific device.
I also modified it to generate future keys, because keys from the past are useless for a realtime search.
We then feed the public keys to the scanner so that we can filter out the device we are looking for, and we have built a proximity sensor which will alert us when the right device is in the vicinity. We can then use the signal strength (RSSI) to see (approximately) how far away it is. Find the right room or place and…
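As a rough illustration of the matching - based on the paper’s description that the first 6 bytes of the current public key become the broadcast MAC address, with the two most significant bits forced to 1 (a random static BLE address); the helper names are mine and the candidate keys are assumed to come from the modified FindMy.py key generator:

def mac_from_public_key(pubkey: bytes) -> str:
    # First 6 bytes of the advertised public key, top two bits of the first byte set
    first = bytes([pubkey[0] | 0b11000000]) + pubkey[1:6]
    return ":".join(f"{b:02X}" for b in first)

def make_matcher(candidate_keys):
    # candidate_keys: advertised public keys generated for the coming hours
    expected = {mac_from_public_key(k) for k in candidate_keys}
    def on_advertisement(device, adv):
        if device.address.upper() in expected:
            # RSSI closer to 0 dBm roughly means you are getting closer
            print(f"MATCH {device.address} RSSI {adv.rssi}")
    return on_advertisement

Passing the returned callback into the BleakScanner from the previous snippet and walking around while watching the RSSI is essentially the proximity sensor described above.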
After this, I performed a bunch of tests in the office building to make sure that I can get a lock on the signal and to see how it behaves through floors, outside and so on (thanks Dominik!).
I also pulled the 7-day location history of the device from OF, so we could build a location history map.
After the first failed search attempt (without proper knowledge), we returned to the hostel in Berlin. Unfortunately, the last OF ping was from Sunday the 18th, and we returned on Friday the 23rd. The chances of finding it were already low by then, because the battery could have died, which would render all of the above useless. I remained optimistic though; from the last visit to the hotel, I figured that there had simply been nobody with an iPhone or iPad around to pick up the OF beacon.
We arrived late in the evening and started walking through the building, trying to scan for any signal. We discovered a bunch of devices broadcasting the OF beacon, but none of them matched the key of the one we were searching for. At that point, I already knew that the method above was not going to work - if the device isn’t broadcasting, we cannot detect it.
Our last shot was calling the police and having them search the room again. Kudos to them, but unfortunately the room search turned up nothing (I am, however, not very confident that they searched the right room).
So unless someone finds the MacBook and brings it to the police or reception, it is lost forever.
We were too late. We couldn’t have retrieved this one (or maybe we could have, but only by breaking the law and being very annoying, going door to door). But we could have, if we had had the knowledge beforehand. And that’s what I am going to focus on - a chance to help others find their lost devices by leveraging Find My and Offline Find.
I am starting by putting together all the Python scripts we used during the search, along with a guide on how to do the same, on GitHub.
The next step is going to be to take the code, turn it into JavaScript (or compile it via WASM) and create a user-friendly application interface where the user can simply input the beacon keys and it will search for the device and show the signal strength (or distance). It is all going to be free to use and open source.
Initially, I started with cloud gaming as a demo for a couple of conference talks in 2017. I simply set up an NV6 (now deprecated) VM in Azure, installed a VPN (which was required back then), Steam and Arma 3, and was able to play on ultra settings without much hassle. Anywhere in the world.
This has truly changed my view on gaming. I previously wrote about how I am using my Surface Pro X for remote work (connecting to a workstation in the office etc.) and also mentioned some of the gaming bits there.
While there are many options to game through, I have found Steam Remote Play to be the best option for me. I will include a tutorial (more like links) for this here, so you can try it out yourself as well.
So what do you need? Start with an Azure account and create a virtual machine of the NV-series (currently NVv3). You can also choose NVv4 with AMD GPUs, but I prefer NVIDIA. You can also use a spot instance to save money (with the risk of the machine being randomly shut down).
Next, choose your operating system. I am gaming on Windows Server 2022 at the moment, but you can also go with 2019 or the Windows 10/11 images.
Once you log in to the virtual machine, you need to install the GPU drivers which you can get from Microsoft’s docs.
Next, you will need to enable audio and install a virtual audio driver. The original code is from here (it is a more advanced and more automated setup, so feel free to use it; I just used this excerpt because I didn’t want it to do autologon).
# Enable Audio Service
Write-Output "Enabling Audio Service"
Set-Service -Name "Audiosrv" -StartupType Automatic
Start-Service Audiosrv
# Install Virtual Audio Driver
[Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]::Tls12
$webClient = new-object System.Net.WebClient
$compressed_file = "VBCABLE_Driver_Pack43.zip"
$driver_folder = "VBCABLE_Driver_Pack43"
$driver_inf = "vbMmeCable64_win7.inf"
$hardward_id = "VBAudioVACWDM"
Write-Output "Downloading Virtual Audio Driver"
$webClient.DownloadFile("https://download.vb-audio.com/Download_CABLE/VBCABLE_Driver_Pack43.zip", "$PSScriptRoot\$compressed_file")
Unblock-File -Path "$PSScriptRoot\$compressed_file"
Write-Output "Extracting Virtual Audio Driver"
Expand-Archive "$PSScriptRoot\$compressed_file" -DestinationPath "$PSScriptRoot\$driver_folder" -Force
$wdk_installer = "wdksetup.exe"
$devcon = "C:\Program Files (x86)\Windows Kits\10\Tools\x64\devcon.exe"
Write-Output "Downloading Windows Development Kit installer"
$webClient.DownloadFile("http://go.microsoft.com/fwlink/p/?LinkId=526733", "$PSScriptRoot\$wdk_installer")
Write-Output "Downloading and installing Windows Development Kit"
Start-Process -FilePath "$PSScriptRoot\$wdk_installer" -ArgumentList "/S" -Wait
$cert = "vb_cert.cer"
$url = "https://github.com/ecalder6/azure-gaming/raw/master/$cert"
Write-Output "Downloading vb certificate from $url"
$webClient.DownloadFile($url, "$PSScriptRoot\$cert")
Write-Output "Importing vb certificate"
Import-Certificate -FilePath "$PSScriptRoot\$cert" -CertStoreLocation "cert:\LocalMachine\TrustedPublisher"
Write-Output "Installing virtual audio driver"
Start-Process -FilePath $devcon -ArgumentList "install", "$PSScriptRoot\$driver_folder\$driver_inf", $hardward_id -Wait
Next, you should create a shortcut on your desktop to disconnect the Remote Desktop session and return it to the console. This is so that Steam streaming can capture the console screen and stream it to you. The shortcut is simple (you can replace %SESSIONNAME% with 1 if it doesn’t work for you), just make sure to run it as Administrator:
%windir%\System32\tscon.exe %SESSIONNAME% /dest:console
If you execute this on your local PC, it will disconnect the session and the user will not need to log in again (i.e. everyone will see what you are doing on your screens, unless you at least turn them off) - this is a security issue, so please don’t do this on your work PC.
Next, download and install Steam, install your games and enjoy. Also remember that you can stream non-Steam games this way too, by adding them to your library - so you can play Battlefield 4 and others.
And you should be good to go. The performance and cost benefits are really worth it, and it’s much cheaper than buying a good gaming graphics card (or an entire PC) as long as you are not playing 24/7.
This part is for more advanced users; unless you face latency issues in streaming (quality, lag, …), you don’t need to do this.
One thing to note is streaming performance. By default, Steam Remote Play works behind NAT without a public IP address, and everything gets routed through Valve’s servers, which adds latency to the route. To get a direct connection instead, open ports on your VM - you have to do this in both the Network Security Group and the Windows Firewall. You will need 27031-27037 for both UDP and TCP (more info). Once you configure this, restart your VM.
Some people have used ZeroTier to establish a VPN (sort of like Hamachi), but I prefer a direct connection.
Another crucial part is to verify that the connection is direct. You can do this by using Wireshark on your local PC and filtering for the VM’s public IP address - you will see a lot of UDP traffic when you start streaming a game. If you instead see the traffic going through Valve’s servers (this sometimes happens, for reasons unknown to me), you can force a direct connection from your Steam client:
steam.exe -console
Then, in the console tab that appears in the Steam client, run the command:
connect_remote <IP>:27036
(replace <IP> with your VM’s public IP address). For some reason, the connect_remote command seems to be necessary, but sometimes in the past it worked for me without it. Maybe it’s a change on Steam’s side or something else, I am not sure. I also created a comment about the direct connection in the setup repo.
You can skip the first few paragraphs if you are not interested in the background.
After having my site in an Azure VM for 3 years, I decided to move it back to WEDOS (less maintenance required) - over time, they enabled Let’s Encrypt certificates for free, so I no longer needed Cloudflare for that. Last week, they published a new blog post about increasing prices - a raise of ~US$0.45 per month, which was perfectly fine - however, to my surprise, they also announced a “domain fee” of ~US$44 per month for not using their DNS servers while pointing to their hosting. This kind of enraged me. I understand the need for raising prices, but such a fee is completely unacceptable in my opinion.
To give a little context, WEDOS created a new service called WEDOS Global, which appears to compete with Cloudflare - DDoS protection, CDN, DNS and more in the future. I suppose they are trying to push more people into the service so they can have higher MAU - I really hate such practices and prefer to have a choice in what and how to configure my things - and one thing is for sure: I am sticking with Cloudflare for DNS. I e-mailed WEDOS and got a reply from their CEO, Josef Grill, explaining that the reasoning behind it is DDoS attacks on their infrastructure, and that with a 3rd-party DNS setup they cannot fight the attacks efficiently. To that, I replied that while I am using my own DNS pointing to their hosting, the records point to their hosting via a CNAME to their DNS, so technically their DNS is still being used - except for the A record, which I am happy to host at Cloudflare Pages since it just redirects to www. Unfortunately, I haven’t gotten a reply to that message yet.
Also at NETWORG we are using WEDOS for a few of our customers and they would be affected the same way.
So I started scratching my head a little and looking at other options for hosting the site, in case they force me to start paying the extra cost. I always try to aim for PaaS-like services where I don’t have to bother with maintenance too much, especially with legacy projects. So I came back to the idea of having my own Docker runtime container with PHP, another container with MySQL, and having them run together. Over the past years I upgraded from PHP 7.4 to 8.1 (with only a couple of changes), so the container needed a bit of refreshing.
Since I am using .htaccess, I decided to stick with Apache, as I don’t want to make any big breaking changes which would consume more of my time maintaining the projects. I initially tried to upgrade my original image to PHP 8.1, but it brought more and more errors (package deprecations, requirements and so on). So I decided to start from a blank Dockerfile in hajekj/php-runtime. I love the way the App Service runtime containers work, so I took inspiration there.
I started with the familiar repo of App Service images. For some reason, Microsoft stopped using Apache for PHP 8 in App Service and just switched everything to NGINX, without giving people much choice. Luckily, there is still an Apache image in the repo, it is still being built and can technically be used, but probably without any support. So the starting point got much simpler.
The App Service image relies on Oryx, which provides the underlying builder images used in Kudu - and it turns out they are also used for running the containers. The Oryx PHP image consists of a couple of layers - the base dependencies, PHP + Apache, extension installation and runtime modifications for Apache. This is then further customized by App Service’s Image Builder later.
The Oryx base images are similar to the docker-library PHP image. I didn’t want to depend on Microsoft’s internal packages, so I went with the community image as the base. I then added all the needed extensions to match the App Service config. I did a couple of modifications in the config - like support for RemoteIPHeader to resolve X-Forwarded-For correctly from reverse proxies, and some other configs done in Oryx and Image Builder. I removed the dependency on Oryx’s startup script and replaced it with a pre-generated script.
After a couple of attempts and toying with the dependency installation and layers, I ended up with a decent PHP 8.1 image.
The next question was: where to host it, when needed, for cheap?
Short note on PHP-FPM
I managed to get the previous PHP 7.4 image running in PHP-FPM mode, which is much more efficient than spawning processes via Apache’s mod_php (an approach even Apache discourages). So if I switch to the new hosting, I will probably upgrade the image to use FPM as well, for performance.
I could go with a custom VM again - in Azure, DigitalOcean or elsewhere - but I wanted to stop having to care about the underlying VM - Let’s Encrypt certificates, updates, security etc. My project is a standard LAMP stack - PHP + MySQL - so I needed a database as well. I could go with Azure Database for MySQL, but the cheapest one costs US$6.32 per month, which is more than the entire WEDOS hosting (yes, I am trying to get the cheapest variant).
After some research, I discovered Fly.io, which allows you to host Docker containers. They have quite a generous free tier with up to 3 shared-CPU, 256MB RAM machines, 3x 1GB persistent volume storage and 160GB of bandwidth. I did some experiments there and just couldn’t get the container to run and serve content, and after half a day of trying to get it to work - I gave up. I am sure it is possible, but I just didn’t want to spend more time on it. I also got a little spooked about their volumes - they don’t seem to offer any high availability or redundancy, which is quite crucial for things hosted this way (be it the PHP code or the database).
After that, I got to look at Azure Container Apps. I have been so focused on Functions, App Service and similar offerings that Container Apps kind of slipped through. Container Apps are the serverless way of hosting Docker containers - in a consumption plan, with a huge free tier - or at least enough for hobby apps and side projects.
You simply tell it to run your container, mount some volumes, say in how many instances, and you are good to go! You can then easily configure things like custom domains, eventually authentication via Easy Auth, and much more! So with my PHP container running there, with an Azure Files mounted volume for the code, logs, PHP sessions and such, I started looking into ways to run a database there.
You can also connect to the containers via a shell, which is quite handy for debugging and such. You can manage the storage by mounting the Azure File Share to your machine.
I require a standard MySQL-compatible database to which I can connect with either mysqli or pdo_mysql. Container Apps offer a way to add add-ons to your containers, which basically means spinning up another container app and connecting it with your container - and you can spin up a MariaDB instance this way. Unfortunately, you don’t have any control over the scale, size or storage. Microsoft says that the storage is persistent and should survive restarts, but they don’t give any guarantees, since it is meant for development purposes only.
So let’s spin up MySQL as a container app, shall we? A MySQL container image is available, and since I have used it before, I am going to stick with it. You simply pass in the MYSQL_ROOT_PASSWORD environment variable, mount the Azure Files volume to /var/lib/mysql for data persistence and you should be good to go - except the container won’t start and will keep crashing.
When you examine the logs, you will see a lot of errors like these:
[ERROR] [MY-012960] [InnoDB] Cannot create redo log files because data files are corrupt or the database was not shut down cleanly after creating the data files.
[ERROR] [MY-012930] [InnoDB] Plugin initialization aborted with error Generic error.
[ERROR] [MY-010334] [Server] Failed to initialize DD Storage Engine
[ERROR] [MY-010020] [Server] Data Dictionary initialization failed.
While it looks really scary, it simply means that MySQL failed to initialize. Where is the problem? It’s the Azure Files mount permissions. The MySQL container runs as the mysql user, but the Azure Files mount is owned by the root user. The container cannot write to the mount, so it fails to run.
I managed to find some hints on Stack Overflow which basically say that you need to mount the File Share with a few specific parameters. Luckily, providing custom mountOptions is already supported (though not exposed in the UI) in Container Apps. I provided the following configuration in mountOptions:
"mountOptions": "dir_mode=0777,file_mode=0777,uid=999,gid=999,mfsymlinks,cache=strict,nobrl"
This allowed the container to boot and run the database, which I also verified by connecting to it via Adminer. When testing data persistence, however, I hit another issue - when restarting the container, or deploying a new revision, the container won’t start. In the logs, you will end up with something like this:
[ERROR] [MY-012592] [InnoDB] Operating system error number 2 in a file operation.
[ERROR] [MY-012593] [InnoDB] The error means the system cannot find the path specified.
[ERROR] [MY-012594] [InnoDB] If you are installing InnoDB, remember that you must create directories yourself, InnoDB does not create them.
[ERROR] [MY-012646] [InnoDB] File ./ibtmp1: 'create' returned OS error 71. Cannot continue operation
This means that the files are locked and used by another container. When deploying a revision, Microsoft only shuts the previous container down once the new one has booted correctly - which never happens here, because the data files are still locked. The solution is to switch the revision mode to multiple: you de-activate the previous revision, then activate the new one, and it runs just fine. If you restart manually, the platform also spins up a container side-by-side and only shuts down the previous one once the new one starts correctly - which, again, never happens - so you have to create a new revision and repeat the step of de-activating the previous one. Quite complicated, but it does the job.
Thinking about backups - there are a few options. I never rely on just the provider’s backups, because when the provider loses them, you are … So what I do is run a GitHub Action every night which backs up the entire storage to a private GitHub (or Azure DevOps) repo, along with a plaintext SQL export of every database. The site content barely changes and is just PHP files. The database grows at a steady rate, so nothing too critical either.
With Azure Files, you can run Azure Backup on top of the shares and create snapshots for resiliency, but with my previous database experience, I would still recommend exporting the database, so that in case it all burns down, you can easily restore it somewhere else and keep going.
Whether this is the next way of hosting - I don’t know, at least not yet, since I haven’t received any official notice from WEDOS regarding the newly introduced monthly fee for not using their DNS. But I am ready to move the project within a few hours at any time, and run it for the same, or maybe even cheaper price.
In our company, we have multiple needs: reach our office networks for two purposes - RDP into our workstations for remote work, and access customers’ services which have our office IP address whitelisted (I will not discuss this practice, but it is what it is). The second need is to be able to access customers’ networks. We have multiple customers, and each of them has their own VPN solution, many of which are incompatible with one another, and the VPN software is sometimes really hard to deal with (obscure drivers, networking configuration etc.). Some of us, and other companies, make use of jump-boxes - basically a shared, dedicated virtual machine which contains the VPN software, so it doesn’t interfere with the local network.
One of the needs is also to be able to properly authenticate and authorize the user accessing the VPN (based on group membership, so it works with JIT), audit the access and be able to revoke it. Ideally from a single place - Azure AD.
How we approach customer networks is simple - we usually get a virtual machine in the network and install the SoftEther bridge, which we then connect to our VPN server as a hub (so there is no need to expose ports on the customer side), and then configure authorization. Authentication in SoftEther is handled via our Radius365 service (a FreeRADIUS deployment with a custom API handling the incoming attributes and returning the user and password), which allows us to authenticate with a user principal name, configure a separate VPN password, and handle authorization based on group memberships. The service itself is a great thing, but it has certain security downsides - like storing passwords in plaintext, or to be more specific, reversibly encrypted, due to the way RADIUS as a protocol works.
In special cases, we have our customers access their network via our VPN server as well - Radius365 supports B2B identities (in Cloudflare One, we are using B2B access in Azure AD).
This has been working reliably for years.
We started evaluating Cloudflare One about half a year ago. Previously, we played with it when it got released, but never got to deploy it.
Cloudflare One is free for up to 50 users, which is amazing for small companies. The free tier has almost all features - the exceptions being long-term log retention (24 hours only) and the fact that you only get community support.
We started with configuring the authentication - connecting to Azure AD. This is super easy - simply create an app registration, configure redirect URL and create a secret. From here, you can also configure conditional access in AAD and move on.
To my surprise, Cloudflare One also supports SCIM group provisioning and user de-provisioning. You can limit the session length and effectively force re-authentication of users, and also revoke a user’s session based on their Azure AD status or a group membership change. This is really awesome for added security (in Radius365, this was handled in a similar way).
Connecting networks is easy. You start by installing Cloudflare Tunnel (cloudflared) on a server in the network (you can also install multiple for high availability) and provisioning it in the portal. Then you create a virtual network and point the tunnel’s address space to it.
You can either point a specific subnet, like 10.0.0.0/16 or 192.168.0.0/24, or have all the traffic routed through the tunnel via 0.0.0.0/0 - this is what we use to connect to our office, to end up with the correct public IP address.
Next, you definitely want to configure authorization - specify who can access resources in which tunnel. This is where things got slightly confusing for me. I would expect you to configure authorization on the virtual network level, but you have to configure the permissions in the gateway’s network policies (thanks cscharff for pointing me in the right direction).
So we create a block policy for each virtual network, which is something like: reject all access if the network is the given virtual network and the user is not a member of the group that should have access to it.
In our office, we are using a Turris Omnia as a router. This router can run LXC containers, so we simply set up the armhf version of cloudflared in an Ubuntu container like above, and we are good to connect to the office.
Once connected, we can RDP to a workstation and work remotely (we plan to move to the native RDP support in Cloudflare in the future). However, the issue is that when the machine you are remoted into connects to a virtual network on its own (to access a customer environment), the RDP connection drops!
This is caused by the split tunneling configuration, where we removed the split tunnel configuration for some of the local ranges like 192.168.* and 10.*. When Cloudflare WARP connects, it overrides the routing table of the computer, which locks you out of the RDP session.
Luckily, there’s a solution for it - currently in beta - managed networks and device profiles. Managed networks simply identify where the device is by attempting to open a TLS session to a specific IP:Port combination and comparing the fingerprint of the presented certificate. This way, we can set up the split tunnel for devices sitting in the office and let the RDP session continue even when they connect to a tunnel.
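The fingerprint Cloudflare expects is, as far as I can tell, the SHA-256 hash of the certificate served on that IP:Port. If you want to double-check what your endpoint actually presents, a quick Python check looks roughly like this (the host and port are placeholders for your own endpoint, e.g. the HAProxy listener shown further below):

import hashlib
import ssl

# Placeholders - replace with the IP and port your managed-network check points to
host, port = "192.168.1.1", 4434

pem = ssl.get_server_certificate((host, port))   # fetch the presented certificate (no validation needed)
der = ssl.PEM_cert_to_DER_cert(pem)
print(hashlib.sha256(der).hexdigest())           # this is the fingerprint to paste into the managed network config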
The above worked great, until I tried to connect via my iPhone. The iPhone just refused to connect to the office network and kept reconnecting. I analysed the iPhone’s logs and found out that the device does managed network discovery, connects to the office network (from LTE for example), and since the network changes, it does managed network discovery again - and because the device is now present in the office network, it decides that the location has changed and reconnects again. And this keeps repeating.
After a few days, I finally found a working solution for this. It involves blocking the managed network discovery endpoint when the request is sent from within the container running cloudflared. This could be done by configuring a firewall rule (which had no effect for me on Turris), so I instead installed HAProxy and set the following configuration:
global
maxconn 32000
ulimit-n 65535
uid 0
gid 0
daemon
nosplice
defaults
timeout connect 5000ms
timeout client 50000ms
timeout server 50000ms
listen my_http_proxy
bind :4434 ssl crt /root/cloudflared/example.pem
# Block managed network discovery from cloudflared
tcp-request connection reject if { src 10.183.0.3/32 }
mode http
http-request return status 200 content-type "text/plain" lf-string "Hello cloudflared!"
This allows iPhones (the issue doesn’t happen on Windows, Android, Linux or Mac) to connect to the office network, and office workstations keep working as expected. The only limitation is that you are not able to connect to customer networks from an iPhone while on the office Wi-Fi, but generally iOS access is not a high priority and is usually used in emergency situations when you are without a computer.
Besides Cloudflare, we were considering Microsoft Entra Global Secure Access, but the remote network connection is in private preview at the moment. It is also limited to Windows-only clients, which is a big downside for now (and it doesn’t support BYOD Windows scenarios either). One more solution which we evaluated was Shieldoo, which was created by an ex-Microsoftie. It is open source, but doesn’t meet our needs, like full remote network access.
Before I go any further: a false positive, in the context of this article, means that a message ended up in the Junk mail folder because the sender wasn’t on my whitelist (either in contacts or the safe senders list) due to the settings I previously configured. So we are trying to identify the legit messages and senders which should be on the safe list, but aren’t there yet.
Since the last post, my inbox got 0 spam messages - everything I would consider junk ended up in the Junk folder. However, some legit messages ended up in the Junk folder too, because I don’t have the sender in my contacts or safe senders list, and I don’t want to check my Junk mail folder every day. Considering Microsoft (and almost everyone in the tech sector) is crazy about AI and putting it everywhere at the expense of other things (pun intended), I thought: let’s try to use OpenAI - the GPT-4 model - and see if it can further help out with Junk classification.
If you don’t have access to Azure OpenAI yet, you can request it here. Alternatively, you can make use of OpenAI API, which doesn’t require any specific access.
You can also achieve the same thing in Power Automate, however you will need a Premium license which is unavailable for consumer accounts so you won’t be able to call the OpenAI API. I suppose there could be some way around that, but I am not going to go into it in this article.
Start by creating a Logic App; you can go with Consumption, which will be much cheaper in this case, or create a Standard one.
Once the Logic App is created, you have to choose the trigger. You can either go with a scheduled trigger which runs, say, every 15 minutes, or with the When a new email arrives trigger. I went with the first one, because I prefer to process multiple messages at once and to have the possibility to throttle the run in case the mailbox is flooded with spam (this is important for cost control).
Next, we retrieve all unread emails from the Junk folder via the Get emails action. To prevent a message flood, we only retrieve the top 10 messages (set it higher or lower as needed).
Then we iterate through each retrieved email via For each and assemble the request. I used multiple Compose actions to assemble the body of the email and the request. The body has to be shortened to fit the maximum number of tokens you can send to the API. In my case, I chose the 32k model, which means I shorten the message to 32,000 characters. This is done via the following expression:
substring(
string(item()?['Body']),
0,
if(
greater(
length(
string(item()?['Body'])
),
32000
),
32000,
length(string(item()?['Body']))
)
)
Next, we assemble the email. I chose to provide the subject, body and sender address:
{
"body": "@{outputs('EmailBody')}",
"from": "@{items('ForEach-Email')?['From']}",
"subject": "@{items('ForEach-Email')?['Subject']}"
}
Next, we need to send the request to the Azure OpenAI API. This is a POST request to https://<your-instance>.openai.azure.com/openai/deployments/<your-deployment>/chat/completions?api-version=2023-07-01-preview with an Api-Key header containing the value of your key. The body will be the following:
{
"frequency_penalty": 0,
"max_tokens": 10,
"messages": [
{
"content": "You are e-mail spam filter. You will receive sender of a message, its subject and body. Based on this, you will reply with just a number, which corresponds to the confidence of the message being spam.",
"role": "system"
},
{
"content": "@{outputs('Email')}",
"role": "user"
}
],
"presence_penalty": 0,
"stream": false,
"temperature": 0.7,
"top_p": 0.95
}
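If you want to experiment with the prompt outside of the Logic App first, the equivalent call is easy to reproduce. Here is a rough Python sketch of the same request - the instance, deployment, key and the sample e-mail are placeholders; the real flow sends the JSON assembled above as the user message:

import requests

# Placeholders - use your own Azure OpenAI instance, deployment and key
endpoint = "https://<your-instance>.openai.azure.com/openai/deployments/<your-deployment>/chat/completions?api-version=2023-07-01-preview"
headers = {"Api-Key": "<your-key>", "Content-Type": "application/json"}

body = {
    "max_tokens": 10,
    "temperature": 0.7,
    "messages": [
        {"role": "system", "content": "You are e-mail spam filter. You will receive sender of a message, its subject and body. Based on this, you will reply with just a number, which corresponds to the confidence of the message being spam."},
        {"role": "user", "content": '{"body": "...", "from": "someone@example.com", "subject": "You won!"}'},
    ],
}

response = requests.post(endpoint, headers=headers, json=body)
print(response.json()["choices"][0]["message"]["content"])  # spam confidence between 0 and 100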
You can of course play with the system prompt and other variables and set it to whatever fits you the best. You can also provide it with examples and so on.
As a response, you will receive something like this:
{
"id": "chatcmpl-88SYGSryVZQPr94CXFNo6Iu47qNKh",
"object": "chat.completion",
"created": 1697027068,
"model": "gpt-4-32k",
"prompt_filter_results": [
{
"prompt_index": 0,
"content_filter_results": {...}
}
],
"choices": [
{
"index": 0,
"finish_reason": "stop",
"message": {
"role": "assistant",
"content": "100"
},
"content_filter_results": {...}
}
],
"usage": {
"completion_tokens": 1,
"prompt_tokens": 6303,
"total_tokens": 6304
}
}
The result will be the content value, which should contain a number between 0 and 100. Since choices is an array, you should pick its first element (you can use first() in expressions) and then get the value easily.
Based on the value retrieved above, you can decide what to do with the message - either store a summary somewhere and send yourself a daily digest of low-confidence messages, move messages with low spam confidence to the inbox, etc. Up to you. Just make sure to mark the message as read when you are done, so it won’t be processed again.
So far, this has processed 57 messages since yesterday in my Junk e-mail, and out of those 57 messages, OpenAI marked:
So it’s not exactly perfect, but it can handle the basic filtering (which Microsoft kind of fails at right now) and save some of my time. Right now, I am running this in monitor mode, i.e. it reports all processed messages to an Excel spreadsheet in OneDrive, and based on the results in a week or two, I am going to adjust the system prompt and eventually provide some examples to make the classification better. Based on that, I might have this “AI” decide which mails from my Junk folder should go back to my inbox, and eventually manage my safe senders list in the future. Have fun!
There are plenty of people complaining about this situation on Reddit (1, 2, 3, 4, 5, 6), Microsoft Answers (1, just Google for more yourself) and many more. The answers from Microsoft are really “helpful”. I had a bunch of tickets open with Microsoft Support (#1056895825, #1056728569, …), all of which resulted in the case being closed and me being told “it’s a known issue and we are working on it” and also “keep reporting the messages”. Well, it’s been 6+ months and nothing has changed (perhaps stop pushing Copilot everywhere and put more people on this issue?).
Recently, I complained on Twitter and got another very helpful response - most of which doesn’t even apply to the Outlook.com service.
…and why doesn’t it happen with business Microsoft 365 accounts? Well, first of all, M365 for companies uses Exchange Online Protection (EOP), which seems to deal with spam really well, while M365 for consumers uses, I guess, Outlook Live Protection (OLP) - and these seem to be two very different systems. I suppose the issue with using EOP for consumer accounts would be that it wasn’t built for filtering spam for millions of user accounts, and that the consumer protections are not interoperable with the business ones.
I have been trying to figure out why these e-mails bypass the filter, and so far I managed to find out the following:
Most e-mails which bypass the rules are sent from compromised accounts and domains (just ban them from the system, use ZAP on all messages sent within the last X minutes and wait for their admin to reach out?).
Those messages have valid SPF and DKIM (a lot of legit e-mail server admins are behind with this; Google will require it soon anyway).
And the content is just remote pictures linking to websites. When you click the link, the Safelinks feature should prevent you from continuing, since it is usually phishing or scam, but it just redirects you. It seems like the verification and checks on the links could use some care too - especially when an e-mail and its content has been reported as spam multiple times by various users.
Besides waiting for Microsoft to do something about this - and I am not really sure whether or when that will happen - there are a few things you can configure yourself.
The things which worked for me are the following: find the request with action=SetMailboxJunkEmailConfiguration (it is sent when you save your Junk e-mail settings - you can catch it in the browser’s developer tools), right click it and select Edit and resend, then find ContactsTrusted, set its value to true and resend the request.
This will effectively allow only whitelisted senders (contacts and safe-list members) to appear in your inbox. It is not the ideal solution, since you should check your Junk folder at least once a week for any false positives and, if you find some, add them to your contacts or safe list, but it’s a much better solution than having to deal with loads of annoying spam messages.
I would like to conclude this article with one statement: I love Outlook.com, I am a paying user, but the level of support and communication Microsoft is providing regarding this issue is completely unacceptable. The status page says nothing is wrong, there is no public support article regarding the spam issue, and the biggest problem is that standard, non-technical users, who are much more vulnerable to falling for phishing, are facing this as well. I am sure that Microsoft is aware and working to resolve this as fast as they can, but they should at least communicate it to their users like they do with other services.
Continues with part 2, where we use Logic Apps and OpenAI to classify the Junk mail and detect legit messages.
In order to enable SCIM in Bitwarden, you just need an Enterprise subscription (if you are on an older Enterprise plan, like we were, just contact their support and have the plan upgraded). Then you configure provisioning in Entra ID and you should be good to start assigning groups and users to the Enterprise Application - however, it is not so simple.
Our setup is the following: we have a lot of users in our tenant and we drive all permissions through group memberships (either security or M365 Groups). Users which are entitled to access Bitwarden are members of a License: Bitwarden group. The intersection of the respective groups and the license group is then used to provision users and configure memberships. This works very nicely with the Directory Connector. However, with SCIM it won’t work this way.
Whenever you assign a group of users to an application in Entra ID, SCIM is going to apply the scoping rules and then provision those users. The scoping rules, however, don’t support checking membership of a group. This means that the membership requirement scope won’t apply.
This resulted in literally all users from our tenant being provisioned into Bitwarden. This would have been a billing disaster (not to mention the explaining we would have to do to all the guests who would have received the invite e-mail), but luckily, we had the seat limit configured, so only about 12 unintended users were provisioned. We removed them right after it happened.
So since we can’t scope users on group membership directly, what can we do about this? We can use directory extensions in Microsoft Graph to store the entitlement information and then create a scope that filters on that property.
Assuming that you have registered the application for SCIM provisioning in your tenant (docs), you can then go ahead and create the extension property via the following PowerShell script:
$applicationId = "" # Application ID of the Bitwarden SCIM application
$params = @{
name = "BitwardenLicense"
dataType = "Boolean"
targetObjects = @(
"User"
)
}
New-MgApplicationExtensionProperty -ApplicationId $applicationId -BodyParameter $params
This script uses the new Microsoft Graph PowerShell SDK. You can also use it from Azure Cloud Shell.
The output is going to contain the property name, something like extension_78b7ed6e43374407b6d7e376242bc31a_BitwardenLicense. This is what we are going to use in our scoping filter. If you don’t see the attribute in the dropdown, you have to add it manually to the schema. Simply open the Azure Portal with this link, navigate to the provisioning rules, and under Advanced you will see Edit attribute list for Azure Active Directory, where you can add your newly created extension attribute.
The rest is just a matter of creating a rule with the IS TRUE operator and you are done.
TIP: Before you configure the SCIM sync, it may be better to change the mapping of the externalId property. Bitwarden’s docs suggest mapping it to the mailNickname property, but as we all know, that is not immutable and an administrator can change its value. Therefore, it is better to map objectId to externalId, since objectId never changes for the directory object.
Once we have this filter done, we should also automate the assignment of the extension attribute, so that any user added to the group will be provisioned to Bitwarden and users removed from it will be deprovisioned. We are going to leverage PowerShell for this again, in combination with Azure Functions.
Start by creating a new PowerShell-based Azure Function. We are using the Consumption tier for this, since it will run for free.
Once you create the Function App, navigate to App Files, open requirements.psd1 and add the Microsoft.Graph module, so the file will look like this:
# This file enables modules to be automatically managed by the Functions service.
# See https://aka.ms/functionsmanageddependency for additional information.
#
@{
# For latest supported version, go to 'https://www.powershellgallery.com/packages/Az'.
# To use the Az module in your function app, please uncomment the line below.
# 'Az' = '10.*'
'Microsoft.Graph' = '2.*'
}
Next, go to the profile.ps1 file and comment out the lines which are there (or make the file empty). This is because when you enable Managed Identity in Functions, the default profile automatically initializes the Az module for you; however, we are not including that module, which would result in an error.
Now, you need to enable the Managed Identity. You can also do this via your own App Registration, but I find MSI much easier and more secure.
Once you enable MSI, you need to grant it access to Microsoft Graph. You are not able to do this in the UI, so PowerShell to the rescue:
$MSI = Get-AzureADServicePrincipal -ObjectId "<your_msi_object_id>"
$GraphAppId = "00000003-0000-0000-c000-000000000000"
$PermissionName = "Directory.Read.All" # Do this also for User.ReadWrite.All and AppRoleAssignment.ReadWrite.All, the permissions will be explained later
$GraphServicePrincipal = Get-AzureADServicePrincipal -Filter "appId eq '$GraphAppId'"
$AppRole = $GraphServicePrincipal.AppRoles | Where-Object { $_.Value -eq $PermissionName -and $_.AllowedMemberTypes -contains "Application" }
New-AzureAdServiceAppRoleAssignment -ObjectId $MSI.ObjectId -PrincipalId $MSI.ObjectId -ResourceId $GraphServicePrincipal.ObjectId -Id $AppRole.Id
Now you can create the timer-triggered function. You can go with the default 5 minute interval, however if you have many users in your tenant, you may want a longer interval. Next, we use the following code in the Function:
Connect-MgGraph -Identity -NoWelcome
$tobelicensedUsers = Get-MgGroupMember -GroupId "<your_license_group_id>" -All | Foreach-Object { ,$_.Id }
$bitwardenEnabledUsers = Get-MgUser -Filter "<your_extension_property_name> eq true" -All | Foreach-Object { ,$_.Id }
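# Compare-Object side indicators: '=>' marks IDs present only in $bitwardenEnabledUsers (to be deprovisioned),
# '<=' marks IDs present only in $tobelicensedUsers (to be provisioned)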
$remove = Compare-Object $tobelicensedUsers $bitwardenEnabledUsers | Where-Object { $_.SideIndicator -eq '=>' } | Foreach-Object { $_.InputObject }
$add = Compare-Object $tobelicensedUsers $bitwardenEnabledUsers | Where-Object { $_.SideIndicator -eq '<=' } | Foreach-Object { $_.InputObject }
$add | Foreach-Object `
{
$json = '{ "<your_extension_property_name>": true }'
Invoke-MgGraphRequest -Method PATCH "https://graph.microsoft.com/v1.0/users/$($_)" -Body $json -Debug
}
$remove | Foreach-Object `
{
$json = '{ "<your_extension_property_name>": null }'
Invoke-MgGraphRequest -Method PATCH "https://graph.microsoft.com/v1.0/users/$($_)" -Body $json
}
This will make sure that all users who have the property and are no longer in the group will be de-provisioned, and those who are in the group and don’t have the property yet will be provisioned. Note that we are using Invoke-MgGraphRequest since there is no cmdlet for updating the extension property yet.
The last step is to automate group assignment to the Enterprise Application, so that the groups and their entitled users will be provisioned into Bitwarden automatically. This can be done via the following PowerShell:
Connect-MgGraph -Identity -NoWelcome
$groups = Get-MgGroup -All
# Our groups have a strict naming convention, so we can filter those to be provisioned easily
$pctGroups = $groups | Where-Object { $_.DisplayName -like "PCT*" }
$agtGroups = $groups | Where-Object { $_.DisplayName -like "AGT*" }
$intGroups = $groups | Where-Object { $_.DisplayName -like "INT*" }
$pstGroups = $groups | Where-Object { $_.DisplayName -like "PST*" }
$filteredGroups = $pctGroups + $agtGroups + $intGroups + $pstGroups
$existingAssignments = Get-MgServicePrincipalAppRoleAssignedTo -ServicePrincipalId <your_scim_app_id> -All | Where-Object { $_.PrincipalType -eq 'Group' } | Foreach-Object { ,$_.PrincipalId }
$toAssign = $filteredGroups | Where { $existingAssignments -notcontains $_.Id }
$toAssign | ForEach-Object { New-MgServicePrincipalAppRoleAssignment -ServicePrincipalId <your_scim_app_id> -ResourceId <your_scim_app_id> -PrincipalId $_.Id -AppRoleId <role_id_from_scim_app> }
You can either put it all into a single function or create multiple functions and have them run independently.
This is how we worked around Entra ID’s provisioning limits with SCIM and moved to Bitwarden’s SCIM integration instead of running the Directory Connector.
Along with the language repo itself, Microsoft also has a repo with samples, one of which is an interactive playground powered by ASP.NET Core. The experience is also hosted and available directly from your browser, so you don’t have to run it yourself.
Over the past few years, Microsoft implemented support for WebAssembly in ASP.NET, and it is still continuously getting a lot of attention. Thanks to this, I decided to try to port the Power FX sample to run directly in the browser, without any need for a server side.
The main point of this was to be able to evaluate Power FX directly from JavaScript (an existing React application in my case). Blazor supports two-way interop with JavaScript - calling JavaScript from .NET, and calling .NET from JavaScript. We will use the latter.
Once Blazor loads, you can execute DotNet.invokeMethodAsync and DotNet.invokeMethod from your JavaScript, passing the required parameters - assembly name, method identifier and arguments.
Everything which I implemented was done in Program.cs - simply methods annotated with the JSInvokable attribute. The methods are almost the same as the ones provided in the host sample.
Once compiled, the application outputs a wwwroot folder with a _framework folder in it, which contains all the necessary code. Because I was testing things locally and didn’t want to complicate the build process, I simply launched http-server from the wwwroot folder on http://localhost:7080. Next, I had to customize the React part of the sample. Luckily, the sample is very simple - it is a basic create-react-app scaffold with a custom page which displays the interactive formula bar and some debug information.
In the original sample, the communication is done via HTTP calls executed via the async fetch method. Replacing it was really simple. Instead of the original:
const result = await sendDataAsync('eval', JSON.stringify({ context, expression }));
The code now calls the following:
const result = await DotNet.invokeMethodAsync<string>("PowerFxWasm", "EvaluateAsync", context, expression);
But before I could call this method, I had to load the Blazor code into the existing page (this could have been done nicer, and will probably change as I move forward with this). Because I was hosting the script locally on a different host (the React app runs on port 3000), the scripts and required resources weren’t loading due to CORS errors. The first fix was to run http-server with the --cors parameter. Because it was technically running on a different host, I also had to modify the boot of the Blazor app to provide it with the correct hostnames. Thanks to a GitHub issue concerning something similar, I managed to get it up and running quite fast:
const script = document.createElement('script');
script.type = 'text/javascript';
script.src = `${process.env.REACT_APP_PFX_WASM_HOST}/_framework/blazor.webassembly.js`;
script.setAttribute("autostart", "false");
script.crossOrigin = "anonymous";
script.onload = async () => {
await Blazor.start({
loadBootResource: function (type, name, defaultUri, integrity) {
console.log(`Loading: '${type}', '${name}', '${defaultUri}', '${integrity}'`);
switch (type) {
case 'dotnetjs':
return `${process.env.REACT_APP_PFX_WASM_HOST}/_framework/${name}`;
default:
return fetch(`${process.env.REACT_APP_PFX_WASM_HOST}/_framework/${name}`, {
credentials: 'omit'
});
}
}
});
ReactDOM.render(
<BrowserRouter basename={baseUrl}>
<PowerFxDemoPage />
</BrowserRouter>,
rootElement);
};
document.body.appendChild(script);
I created an environment variable called REACT_APP_PFX_WASM_HOST which holds the current hostname and is then used to load the resources correctly. To avoid CORS issues, we are omitting the credentials from the requests; the same goes for the initial script append. If you are on the same host, you don’t need to do this.
The most important thing, however, is to await the Blazor.start call, so that we continue only once the application is running (more on that here). The only bad thing is that the types are not fully available at the time of writing, so you may need to go with @ts-ignore in your code. If you don’t await the start, you will likely end up with a No .NET call dispatcher has been set exception.
Once I made all the changes, I was able to run the code, which is now available on Github. And since it runs in the browser without any backend, it is available on GitHub Pages right from your browser (feel free to check with F12 that no API requests are made 😉).
So what next? Right now, this is just a simple proof of concept to see if it runs and to compare the performance. There are a few things which need to be configured - like tree shaking, since the total size of the downloaded resources is 20MB, which is really excessive. Once that is handled, I would like to publish this to a CDN as a library, so that anyone can reference it from their code and use Power FX right away - eventually with some wrappers to simplify the loading process.
Cloudflare Workers is essentially a distributed serverless platform which runs your code in their edge locations around the world. Thanks to this, the latency is super low (I am currently at 5 ms). Cloudflare Workers support multiple languages - from JavaScript to anything which can be compiled to WebAssembly. I chose to build this one with TypeScript, but you can choose from a bunch of others.
Because I can! And I have really wanted to try it out for quite a long time. I have used it a few times for very simple things, like hosting the microsoft-identity-association.json file (more about that here), but never really used it for any API or anything. Also, Troy Hunt wrote a really awesome article about using Workers for his Have I Been Pwned service. So I thought I would give it a shot. And what would be the purpose of this article if I didn’t include some Azure stuff as well! Obviously, Azure AD authentication was my first pick!
For full details, visit How Workers work on Cloudflare’s docs.
Cloudflare uses the V8 engine for executing your code - it’s the engine which runs JavaScript in your browser (assuming you are using a Chromium-based one like Microsoft Edge).
The important thing to remember is that it isn’t Node.js. You don’t have the full extent of Node.js core modules available - like buffer, os, … This creates the need to use polyfills which somewhat emulate the functionality. Personally, I found modules with a dependency on process not to work at all.
That’s one of the reasons why you can’t just go ahead with the passport-azure-ad module or MSAL for Node.js.
I decided to go with building my own sample, because the samples I could find didn’t really handle the API flow - i.e. validating the Bearer token. The official tutorial for Auth0 is a regular authorization code flow, which is not really useful for an API (there is also an example of modded Auth0 code to work with AAD).
The first thing we need to do when we receive a new request is to validate the token. I found a handy library called @cfworker/jwt which has native support for Cloudflare Workers. However, I had to make a few changes to it.
First, the library expects to find the JSON Web Key Set (JWKS, the set of public keys which you validate the token’s signature with) at a fixed URL at <authority>/.well-known/jwks.json. This is an issue with Azure AD, since the JWKS is not available at that URL. In fact, you should obtain the JWKS URL from the <authority>/.well-known/openid-configuration endpoint, under the jwks_uri property.
So I implemented a getOidcMetadata method which accepts the issuer value from the received token and does a lookup to the OpenID Connect metadata endpoint. From there, jwks_uri is taken and a lookup to the proper JWKS endpoint is done.
Loading the JWKS and OIDC metadata should be properly cached. In the sample, the JWKS is cached in-memory. You can use the Worker’s cache to cache it for longer periods of time, more on that later.
The entire validation logic is then hidden in the isTokenValid method. Mind the TODO part: you should always validate the issuer of the token, so you know which authority (tenant) it has been issued by (this is how you set up multi-tenancy, by the way). The last modification I had to make was to the token validation within the @cfworker/jwt library - I took out the issuer validation and moved it to isTokenValid, so you would be able to get multi-tenant support.
I tested it with both V1 and V2 tokens and also B2C tokens. It also works with a token issued to a service principal (e.g. via the client_credentials flow). So you should be good to use it in most scenarios.
So now you can validate tokens on the edge!
Now that we have a valid token, let’s not stop there. We can use my favorite on-behalf-of (OBO) flow to retrieve a token for the Microsoft Graph API and retrieve data from it!
Because you can’t run MSAL in Cloudflare Workers (the upload was crashing for me, even with all the polyfills added in Webpack), I implemented my own simple method called getOboTokenCached. It requires the tenant ID (you can get it from the token), the token (from the Authorization header) and the event context (so we can access the Cache). Calling the OBO endpoint with the information above is quite simple, but it adds about 400 ms to each request. Also, why request a new token every time you receive a request? In MSAL, the Token Cache handles this for you; here, you have to deal with it yourself. The easiest way for me was to use the built-in cache. However, you can’t just pass it the request and response - the cache API (since it is the same one as in the browser) allows you to cache only GET requests.
So the first thing we have to do is create a proper cache key which is an equivalent of a GET request. To do that, we take the token endpoint’s URL https://login.microsoftonline.com/${tenant}/oauth2/v2.0/token and suffix it with a SHA-256 hash of the request’s body (containing the token, client ID and secret). Thanks to this, we get a unique key for the combination above. If the client’s token changes (due to a refresh, for example), we bypass the cache and retrieve a new token. Besides that, we cache the token response for only 60 seconds (you would use more in production, depending on your token’s lifetime). The result is that we make the request at most once every 60 seconds.
The last step is to call the Graph API with the received token and return back the response (or do whatever you need to do with it).
Fully working sample: https://github.com/hajekj/cloudflare-workers-aad
To try it out yourself:
- Create an Azure AD application with an Application ID URI set, along with some default scope you plan to use for the OBO flow.
- Set OBO_CLIENT_ID and OBO_CLIENT_SECRET to your application’s values (the secret can be set as an encrypted value in the Worker’s dashboard).
- Call https://xxxx.yyyy.workers.dev/graph/me to try out the OBO flow; any other path will result in just token validation and the claims being output.