This was a far more complicated process than I ever wanted it to be; I went down so many rabbit holes that I just hope I can save myself, or anyone else, from repeating them.
First, turn Memory Integrity off on your host, as it conflicts with nested virtualization: go to Windows Security > Device Security > Core isolation details, and toggle Memory integrity off.
In the VM guest, run this command in an elevated Command Prompt:
bcdedit /set hypervisorlaunchtype off
Now you can turn on "Enable Nested VT-x/AMD-V" in the VM's settings.
With luck the VM will start up. If not, remove all of the guest's Hyper-V-related Windows features: disable them in "Turn Windows features on or off" and restart.
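If you prefer the command line, the same Hyper-V features can be stripped from an elevated Command Prompt. This is a sketch, not from the original post; the exact feature names vary by Windows version, so check them first:

```shell
:: List features to confirm the exact names on your Windows build
dism /online /get-features

:: Disable the Hyper-V-related features (names are typical, not guaranteed)
dism /online /disable-feature /featurename:Microsoft-Hyper-V-All /norestart
dism /online /disable-feature /featurename:VirtualMachinePlatform /norestart
dism /online /disable-feature /featurename:Windows-Hypervisor-Platform /norestart

:: Keep the hypervisor launch off, then reboot for everything to take effect
bcdedit /set hypervisorlaunchtype off
shutdown /r /t 0
```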
Thanks to ChatGPT and a lot of trial and error I got this working...
1) Confirm you have an NVIDIA GPU
Press Win key, type Device Manager, open it.
Expand Display adapters → you should see NVIDIA … (e.g., “NVIDIA GeForce …”).
If you don’t have an NVIDIA GPU, you can still use WhisperX on CPU, but this guide is for CUDA (GPU).
2) Install NVIDIA GPU driver (latest)
Go to NVIDIA GeForce / Studio drivers (or your OEM site) and install the latest driver.
After install, restart Windows.
Current PyTorch GPU wheels bundle their own CUDA runtime and work with any sufficiently new driver (you do not need a matching full CUDA Toolkit for the PyTorch wheels themselves).
3) Install the CUDA Toolkit (Toolkit 12.8 as WhisperX asks)
WhisperX’s README explicitly tells Windows users to install the CUDA Toolkit 12.8 before WhisperX if you want GPU acceleration.
Download CUDA Toolkit 12.8 for Windows from NVIDIA’s CUDA downloads page.
Run the installer → accept defaults → finish.
Restart Windows.
If you want the official step-by-step install reference for Windows CUDA: NVIDIA’s CUDA Installation Guide for Microsoft Windows.
4) Install Python (64-bit) – recommended 3.10 or 3.11
Go to python.org → Downloads → Windows.
Download Python 3.11.x (64-bit).
Run installer → check “Add Python to PATH” → choose Install Now → finish.
WhisperX supports recent Python versions; its repo includes a .python-version and standard wheels support 3.8–3.11. (We’ll use 3.11 for broad library compatibility.)
5) Install FFmpeg (required by Whisper/WhisperX)
Option A (manual, no package managers):
Download a Windows FFmpeg build (e.g., from ffmpeg.org or a reputable mirror), unzip to C:\FFmpeg\.
Add C:\FFmpeg\bin to your PATH:
Press Win → search “Edit the system environment variables” → Environment Variables…
Under System variables, select Path → Edit → New → add C:\FFmpeg\bin → OK out of all dialogs.
Close and reopen Terminal/PowerShell so PATH updates.
6) Create an isolated Python environment
Open Windows Terminal (or PowerShell).
Create a new virtual environment folder (any folder is fine). Example:
py -3.11 -m venv C:\whisperx
Activate it:
C:\whisperx\Scripts\activate
Upgrade pip:
python -m pip install --upgrade pip
7) Install PyTorch with CUDA support (cu121 wheels)
On Windows, the official PyTorch CUDA wheels ship with the needed CUDA runtime (cuDNN etc.). The most common, stable choice today is CUDA 12.1 wheels (labelled cu121). Important: Installing PyTorch this way ensures torch is GPU-enabled; letting other packages pull a CPU-only torch is a common mistake.
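The install command for the cu121 wheels, plus a quick sanity check, might look like the following. Treat the package list as an assumption — WhisperX may pin specific torch versions, so check its README first:

```shell
# Install GPU-enabled PyTorch from the cu121 wheel index (inside the venv)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121

# Sanity check: exercise torchvision's NMS op and confirm CUDA is visible
python -c "import torch; from torchvision.ops import nms; nms(torch.tensor([[0.,0.,1.,1.]]), torch.tensor([1.]), 0.5); print('NMS OK'); print('CUDA available:', torch.cuda.is_available()); print(torch.cuda.get_device_name(0) if torch.cuda.is_available() else '(no GPU visible)')"
```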
You should see "NMS OK", "CUDA available: True", and your GPU name.
If CUDA isn’t available here, stop and fix this first (driver, toolkit, or wrong torch wheel). Users often report that installing WhisperX before a proper CUDA-enabled torch causes CPU-only torch to be installed.
8) Install WhisperX
Still inside the same venv:
pip install whisperx
9) Quick GPU test (no diarization)
Put a small audio file somewhere handy (e.g., C:\audio\clip.wav).
Run:
whisperx C:\audio\clip.wav --model small --device cuda --batch_size 4
This should create outputs (.srt, .txt, etc.) and use your GPU.
These usage flags and models are from the WhisperX README examples.
10) (Optional) Enable Speaker Diarization
WhisperX uses pyannote models that require you to accept licenses on Hugging Face and use a token.
Create a (free) Hugging Face account → generate a read token.
On the model pages noted by WhisperX, click "Accept" on the user agreements.
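Once the agreements are accepted, the diarization run adds two flags to the earlier command; the token value here is a placeholder for your own Hugging Face read token:

```shell
whisperx C:\audio\clip.wav --model small --device cuda --batch_size 4 --diarize --hf_token YOUR_HF_READ_TOKEN
```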
When I set about this I wanted to 3D print a model of a golf course for a friend. "How hard could this be?" I thought. Turns out... harder than I expected. This post walks through the entirety of what I tried through the process. If you want you can skip to the end where I walk you through the steps I followed to make my final print.
The first step is getting the tools for the job. So of course I did a lot of searching to find out what other people had done. I found this Reddit post which suggested using Equator to do the job. Despite putting in my credit card number for a free trial, I couldn't get it to work. (Maybe something was broken over the end-of-year holiday period.)
Then I found this post on Medium that led me down a number of paths, only one of which I ended up taking in the long run! It suggested using OpenTopography to get the data, LAStools to convert the LIDAR data into a digital elevation model (DEM), and then QGIS to convert the DEM into an .STL for printing.
So first I went to OpenTopography, which provides freely accessible datasets, but some of them are restricted to educational use only. You need a .edu email address, and I don't have one, so I kept searching.
There are plenty of other sources for this data, and the good news, if you are in the US, is that the US Geological Survey has the USGS Lidar Explorer, which shows where Lidar coverage exists and lets you download the data. You can draw a box around your area of interest (AOI) and it will give you data to download for it.
Now with data in hand, I ran the las2dem tool from LASTools against my dataset. And while I was able to get it to run, I got this warning:
WARNING: unlicensed. over 1.5 million points. inserting black diagonal.
Sure enough, LAStools was able to convert the data into a DEM, but it also drew lines across the print, which certainly reduced its realism. I fired off an email to the LAStools makers to see if they had any licensing for hobbyist use, but kept moving forward.
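For reference, the las2dem run was along these lines. This is a sketch: the input filename and the 1-meter grid step are assumptions, not the exact values from my run:

```shell
# Convert a LIDAR tile to a DEM raster on a 1-meter grid
las2dem -i usgs_tile.laz -step 1.0 -o dem.tif
# Optionally add "-keep_class 2" to keep only ground-classified points
# (bare earth: no trees or buildings)
```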
Then I got QGIS installed, a free Swiss-army-knife of GIS tools. Spoiler alert: QGIS is where I ended up doing all of the work in the long run. I could drag my DEM into QGIS and see it, generate a "hillshade" which makes it a little easier to visualize, and also install a plugin called "DEMto3D" which converts the DEM into an .STL file for 3D printing. I found a terrific video on YouTube that walks you through the process of using this plugin, and I fully recommend you watch it.
So, I'm through the process - generated a .STL, and printed it, and... was totally underwhelmed. The diagonal line is distracting, the trees are too pointy, the shape is laying at an awkward angle on a base that is square, there is a lot of stuff in the print I don't want... it's just not awesome.
But that's why we keep trying things and learning. So now let's get on with the ACTUAL process I followed for the final product:
Use the USGS Lidar Explorer to download all of the original LIDAR scans that covered my AOI
Import all the LIDAR data into QGIS (it aligns the files automatically, since the GPS location information is built into each one). In QGIS' advanced processing menu, in the Point cloud data management section, use the Merge tool to pull the six files covering your area into one single dataset.
In QGIS, rotate the maps so that you get your AOI as rectangular as possible to minimize wasted space.
In QGIS, create a new Shapefile layer and draw yourself a polygon (ideally rectangle) around your AOI. Make sure to toggle editing of your shapefile off and save the file before continuing.
In QGIS' advanced processing menu, in the Point cloud data management section, use the Clip tool to clip the merged dataset down to just your AOI.
In QGIS' advanced processing menu, in the Point cloud conversion section, use the Export to raster feature. This did what las2dem did, without the ugly diagonal lines. The "with triangulation" mode seems to give higher resolution, but that wasn't actually helpful for my 3D print.
Use QGIS DEMto3D plugin to generate the .STL files. (I ended up using the tiling feature to do a large print. You enter the size of the final object, and the tiling will split that object for you.) Note that I found that adjusting the vertical exaggeration last was best using this plugin. I ended up doing 1.3x exaggeration for my golf course print.
Print! I ended up doing standard resolution and printing took a long time and a lot of filament! 10-15% infill is plenty.
(Edited 1/19/2022 with some changes to make sure it works on Buster and other Linux distributions with sec-linux.)
It took me a while but I finally found someone who had solved this; I am linking the solution. However, typing in a password and following it up with the one-time password (OTP) is *extremely* user-unfriendly, and anything that makes security harder to use in the name of better security actually makes security worse. Instead, my approach protects the private keys with a password, and you then use only the OTP as the user's password at each login.
So, here is the process, assuming you already have PiVPN installed and working with an OpenVPN configuration.
Install the Google Authenticator PAM module on the Pi: sudo apt-get install libpam-google-authenticator
Edit your openvpn server configuration: sudo nano /etc/openvpn/server.conf and add plugin /usr/lib/openvpn/openvpn-plugin-auth-pam.so openvpn (to use google authenticator) and reneg-sec 0 (to not reconnect every x minutes as the password changes every few seconds).
NOTE: This will make this server configuration only work with OTP. If you have accounts that will just be using passwords then you will need to have a separate server configuration and separate port for that. Info on how to do that is here.
NOTE 2: If you are on Buster, or some version of Linux where the openvpn-plugin-auth-pam.so is not in that location, you should link it into that location. For example in Buster, this would be the command you would run to create the appropriate link: sudo ln -s /usr/lib/arm-linux-gnueabihf/openvpn/plugins/openvpn-plugin-auth-pam.so /usr/lib/openvpn/openvpn-plugin-auth-pam.so
Create a pam.d openvpn profile: sudo cp /etc/pam.d/common-account /etc/pam.d/openvpn
Edit it (sudo nano /etc/pam.d/openvpn) to add this line at the end: auth required pam_google_authenticator.so
Newer versions of Linux with sec-linux use a stricter sandboxing config in systemd that interferes with google-authenticator. To get past this, edit /lib/systemd/system/openvpn@.service and remove this line, so the service can read the .google_authenticator files in the home directories of the accounts we create: ProtectHome=true
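An alternative worth considering (not what the original post did): instead of editing the packaged unit file, which a package upgrade can overwrite, a systemd drop-in override achieves the same effect. This assumes your instance is named openvpn@server:

```shell
# Create a drop-in override for the OpenVPN service instance
sudo systemctl edit openvpn@server
# In the editor that opens, add:
#   [Service]
#   ProtectHome=false
# Then reload and restart so the override takes effect
sudo systemctl daemon-reload
sudo systemctl restart openvpn@server
```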
Now run sudo service openvpn restart to reload the conf change.
Now, create your user. For this to work you will use system accounts (accounts you use to log in to your Raspberry Pi, like 'pi'). You can create as many accounts as you wish with the sudo adduser username command. The user's password really doesn't matter. Once you've created the user:
Log in as the user on the Raspberry Pi: sudo su - username (replace username with the actual username)
run the google-authenticator command and follow the instructions (save the barcode url for next step, or import it directly on the user's device at that time)
Type exit to get out of that user's shell and return to your own.
Executing google-authenticator adds a file .google_authenticator in the user’s home directory. This file must have no rights except read for the user, so run sudo chmod 400 /home/username/.google_authenticator (change to the correct username)
Create a PiVPN account with the exact same name as the user: pivpn -a. Note: the username must be the same as the system account. (The original directions suggest doing this with no password; it is safer to use a password to protect the private key. The password used here will need to be communicated safely to the user.)
Edit the freshly created username.ovpn file and add the lines auth-user-pass (to tell the client to request a username and password on connection) and reneg-sec 0 (so it does not renegotiate every x minutes, since the password changes every few seconds). Also comment out the auth-nocache line by putting a # at the front of it, so the cached credentials survive renegotiation on an always-on VPN.
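The resulting additions to username.ovpn look something like this (a sketch of the edits just described):

```
# Prompt for username and password (the OTP) on connect
auth-user-pass
# Don't renegotiate periodically; the OTP changes every few seconds
reneg-sec 0
# Commented out so cached credentials survive renegotiation
#auth-nocache
```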
Now, just install your .OVPN file on your client. (You can save the private key password if your client supports it, or require prompting for it every time.) Use the barcode URL generated earlier to show the QR Code for import into your authenticator app on your mobile device, and profit!
Log in with the same username and the OTP as the password (the private key password being the one used when you created the account with the pivpn -a command). You're now using multifactor authentication! Something you know (the private key password) and something you have (your authenticator app, a one-time-password generator).
1) Set up AWS so your instance gets a public IP, which isn't the default: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstancesConnecting.html
- Create new VPC
- Create new Internet Gateway (in VPCs), attach it to the new VPC
- Create new Subnet
- On the Route Table tab of new subnet, verify that there is a route with 0.0.0.0/0 as the destination and the internet gateway for your VPC as the target. If not, choose Route Tables->your route table->Edit routes. Choose Add route, use 0.0.0.0/0 as the destination and the internet gateway as the target. For IPv6, choose Add route, use ::/0 as the destination and the internet gateway as the target then save.
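The certbot run that produces the TXT-record instructions mentioned next probably resembled the following; the domain is a placeholder, and the manual DNS challenge is an assumption based on the Google Domains TXT-record step:

```shell
# Request a certificate via the manual DNS-01 challenge; certbot prints
# the TXT record(s) to add at your DNS provider before continuing
sudo certbot certonly --manual --preferred-challenges dns -d example.com
```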
Follow the instructions and add the required TXT records in Google Domains... wait a while (at least a couple minutes) before continuing. You should get the certificates.
(More work to be done to let this be automatically renewable - sudo certbot renew should do it).
"chosen body of citizens, whose wisdom may best discern the true interest of their country, and whose patriotism and love of justice will be least likely to sacrifice it to temporary or partial considerations."
Like many people, I found that the discontinuation of Microsoft's TechNet subscriptions made my job at times a little more difficult. It is nice to be able to spin up a server to test something out without worrying about continually paying for it (or remembering to shut it down) on Amazon AWS. And Microsoft makes 180-day trials of its server operating systems available in the Evaluation Center.
In my case, I am duplicating a customer's 24-server system with 7 of my own virtual machines hosted internally. These are just test servers so I can walk people through the exact mouse clicks and commands they need to run to be successful in the unlikely event they call me with a question. For this purpose, buying 7 licenses alone is a non-starter and would make the effort useless.
So I wondered to myself: how can I keep using one of these 180-day trials for more than 180 days? (Assuming non-production use, of course.) At first I tried just backing up the complete system and restoring it to a newly installed trial. Unfortunately, that also restored the activation status of the machine, and it didn't get me any closer to my goal.
It has taken me a few tries but I've finally found the solution, at least for Windows 2008 R2.
You'll need:
The evaluation CD/DVD of the system you're using.
Available storage space to store complete system backup(s)
Ability to share that storage space through Windows sharing (\\server\share style)
So this is the process that worked for me.
Install the Windows Server Backup feature on the system nearing expiration.
Make a complete Windows System Backup (Bare Metal) to the Windows share dedicated for backup storage.
Shut down the system nearing expiration, hopefully for the last time.
Use the evaluation CD/DVD to create a new 180-day trial system. Go through the entire installation process and get to standard Windows. Go ahead and activate it to get the 180 day timer running.
Reboot the new system and hit F8 while it is booting. Choose "Directory Services Restore Mode" as your boot choice. This will boot you into a special version of Safe Mode.
Install the Windows Server Backup feature on the new system
Restore from the share. You will get a warning indicating that this installation is not meant for this system, and another warning that if communication with the share has problems during restoration, your new system may be unusable. Accept both warnings; it's non-production anyway.
Reboot the system as required after the restoration is completed
Reactivate Windows on the new system. You should again see 180 days available.
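The backup and restore steps above map to wbadmin commands roughly like these; the share path and the version identifier are placeholders, so use whatever "wbadmin get versions" actually reports:

```shell
:: On the system nearing expiration: full bare-metal backup to the share
wbadmin start backup -backupTarget:\\server\backups -allCritical -quiet

:: On the fresh trial, booted into Directory Services Restore Mode:
:: list the backups available on the share, then recover the whole system
wbadmin get versions -backupTarget:\\server\backups
wbadmin start sysrecovery -version:01/19/2022-12:00 -backupTarget:\\server\backups
```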
Hopefully this will help someone. Or at least be someplace I remind myself of how I did it.
Remove the file /etc/dropbear/dropbear_rsa_host_key and then reboot the device.
PyUSB Doesn't Exist
Having trouble getting PyUSB or PySerial running on your BeagleBone Black? You've probably seen that you need to run "opkg install python-pyusb", and then found that there is no such package available.
The answer is a simple one: "opkg update". This will refresh your list of available packages. Then try the above command again and you should have more success.
My friends are having a baby today. They asked me to buy a copy of today's paper so that they could have it to remember what was going on in the world on that date. I did.
Then, I thought, why wouldn't you want to memorialize more than just one local paper to find out what was going on? You see, the awesome Newseum has a site where they share all of today's front pages from over 800 newspapers worldwide. Note that the Newseum indicates the following: Anyone seeking permission to use a front page must credit the Newseum and contact the newspaper directly for permission. U.S. copyright laws apply.
I think fair use doctrine wouldn't have a problem with you saving these images personally in an electronic baby book - but they shouldn't be shared or put up on a web page.
So, here's what I did. I used the Firefox browser with the FoxySpider plugin, and a regex-capable text editor (in my case, TextWrangler).
Install Firefox and the FoxySpider plugin if you haven't already. Go into the FoxySpider preferences and uncheck the "Limit gallery to X thumbnails" box.
Go to the Newseum front pages site, and click the link to show All front pages.
Right-click the page and choose FoxySpider - Advanced Filters
In "Crawl pages within this URL" I modified the value slightly to ensure it captured only the linked pages with images; currently this is http://www.newseum.org/todaysfrontpages/hr.asp*
Click Start!
In the generated page of thumbnails, choose "Select all files" from the drop down and click the Download Files button. Choose a folder to save all the files to.
Close the FoxySpider tab after it's done.
Right-click the page again and choose Save Page As... and save it as a .html file in the same folder as you saved the images to. For the type, choose "Web page, complete" so that it keeps the thumbnails.
Open the .html file in your text editor and perform the following replacement (as of today's date, at least):
Save the file. Now you have a local copy of the Newseum's Today's Front Pages page linked to the local copy of each image you've saved using FoxySpider.
Again, to be clear, this should be for personal, non-commercial use only. I think it would have been cool to see every front page from the day I was born without having to go to the library and check out the microfiche.