Building and Testing GPU Box Kits: Overcoming Initial Failures and Exploring the GIGABYTE V-N4090IXEB-24GD

I previously attempted to assemble a GPU box kit, but it didn’t work due to initial defects. Determined to succeed, I tried again and also managed to borrow a GPU box with a built-in GeForce RTX 4090. I want to share my experience with both challenges.

In my first attempt, I encountered issues with the GPU box kit, so I returned it. However, after seeing a YouTube video showcasing the same kit successfully working, I decided to give it another try. The assembly process was quick since it was my second attempt. I initially tested it with a GeForce GT 1030 (2GB) and a Beelink “SER7” with two USB4 ports. This time it was recognized perfectly, and I was overjoyed.

Encouraged by this success, I proceeded to install a more powerful GPU, the GeForce RTX 4070 Ti, which was left over from a previous installation. Unlike the GeForce GT 1030, the RTX 4070 Ti is highly capable and visually appealing. Despite the numerous cables at the back, the overall setup remained surprisingly compact. When I connected it to the Beelink “SER7”, it was recognized, but the system blue-screened when the driver loaded. I then tried the “MINISFORUM UM690”, which didn’t recognize the connection at all. However, when I connected it to the older NUC “NUC10i5FNH” with Thunderbolt 3, it worked flawlessly.

To summarize the compatibility: the Beelink “SER7” blue-screened on driver load, the “MINISFORUM UM690” never recognized the connection, and only the NUC10i5FNH with Thunderbolt 3 worked without issues. Unfortunately, the NUC10i5FNH’s CPU isn’t strong enough to make it the main computer for this setup, so I’m left without an ideal PC to pair with this GPU box.

On another note, I recently came across the GIGABYTE “V-N4090IXEB-24GD,” a GPU box equipped with a GeForce RTX 4090. It was announced in the United States and will soon be available in Japan. The price was surprisingly affordable, considering the specifications and the current market prices for the GeForce RTX 4090. The compact design and removable filters on both sides were noteworthy features. The front logo lights up in seven different colors and can be customized using the GIGABYTE CONTROL CENTER.

I tested the GIGABYTE “V-N4090IXEB-24GD” with the three models mentioned earlier, and everything worked smoothly. The driver loaded without any issues, and the GPU was recognized. I also had access to the GIGABYTE CONTROL CENTER, although I won’t be using the overclocking feature.

When it came to benchmark testing, the results were not dramatically different between the GeForce RTX 3090 and the RTX 4090. This could be due to the Thunderbolt 3 connection or the limitations of the mobile Ryzen 9 CPU. With a PCIe x16 connection there is a noticeable difference in performance, but it’s not something I can take advantage of in this setup.

In conclusion, while the GIGABYTE “V-N4090IXEB-24GD” offers impressive specifications and performance, the limitations of the Thunderbolt 3 connection and CPU utilization affect its overall efficiency. Regardless, the GPU box is a valuable tool for my AI purposes.

GPU box kit challenge again!?

A while ago I introduced a GPU box kit. In the end it didn’t work properly due to an initial defect and I returned it. I had forced a GeForce RTX 3090 into an AKiTiO “Node Titan” and was enjoying the 24GB of VRAM, but later, searching the internet, I found a YouTube video of the same kit actually working. I thought, “This is it!” and decided to buy it again, taking the risk.

The assembly itself is easy as you can see from the article above. As this was my second time, it was completed in no time. First, I attached a GeForce GT 1030 (2GB) and checked the operation. I still had a Beelink “SER7” with two USB4 ports handy, so I used it.

Assembled the kit and installed a GeForce GT 1030 (2GB) to confirm operation (the card reused from the article above). This time it was recognized perfectly!

This time it worked without any problems. Needless to say, I was delighted.

The next GPU I installed was a GeForce RTX 4070 Ti that was left over after installing a GeForce RTX 3090 in AKiTiO’s “Node Titan”. As you can see, unlike the GeForce GT 1030, it is very powerful. Plus, it’s kind of pointlessly cool (lol). Although there’s a bunch of cables in the back, when you put it together it’s surprisingly compact overall.

Viewed from the front diagonal and the rear diagonal

When I connected it to the Beelink “SER7”, it was recognized, but the system blue-screened when the driver loaded. The result was the same regardless of which of the two USB4 ports I used. Next I connected it to the “MINISFORUM UM690”, which is currently my AI machine, but it did not recognize the connection (USB4) at all. This was a problem, but when I connected it to the somewhat old NUC “NUC10i5FNH”, which has Thunderbolt 3, it worked without problems.

I managed to connect GeForce RTX 4070 Ti to NUC “NUC10i5FNH”

When summarized in a table, it looks like this.

             Beelink “SER7”  MINISFORUM “UM690”  NUC “NUC10i5FNH”
             (USB4)          (USB4)              (Thunderbolt 3)
GT 1030      ○               Not recognized      Not tested
RTX 4070 Ti  Blue screen     Not recognized      ○

Since the box was originally advertised as Thunderbolt 3, I have no choice but to give up, concluding that the USB4 connections fail because of spec differences. I had been considering installing a GeForce RTX 4090 if this worked out, so the result was disappointing.

My plan is to give up on the “MINISFORUM UM690,” which doesn’t recognize the connection in the first place. The Beelink “SER7,” which blue-screens when loading the driver, might work if I switch the OS to Linux, but I haven’t confirmed this.

The only machine that works, the NUC10i5FNH, has a CPU that can’t keep up during generation, so it’s hard to make it the main machine. However, buying a top-of-the-line PC just for its Thunderbolt 3 port would be putting the cart before the horse.

On the other hand, what is GIGABYTE “V-N4090IXEB-24GD”?

Meanwhile, when the GIGABYTE “AORUS RTX 4090 GAMING BOX,” a GPU box equipped with a GeForce RTX 4090, was announced in the United States (August 3, 2023), I asked the editor whether it would be released in Japan, and the unit we ordered has now arrived.

The Japanese announcement came on September 8, 2023, under the product name GIGABYTE “V-N4090IXEB-24GD”. The selling price at Newegg in the US was $2,998, so I assumed it would be well over 450,000 yen here, but it was surprisingly cheap at around 398,800 yen. Needless to say, I thought, “Oh!” See the specs in the table below; note that this machine is water-cooled.

Specifications: GIGABYTE “V-N4090IXEB-24GD”
GPU: GeForce RTX 4090 (OC capable)
Interface: Thunderbolt 3, Gigabit Ethernet, USB 3.0 Type-C, USB 3.0 x 2
Video output: DisplayPort 1.4a x 3, HDMI 2.1a x 1
Built-in power supply: 850W, over 90% efficiency (80 PLUS Gold equivalent), PCIe 8-pin x 4
Cooling: water-cooled WATERFORCE (large copper plate, aluminum radiator, 2 integrated fans)
Accessories: Thunderbolt 3 cable (500mm), power cable, installation guide
Software: GIGABYTE CONTROL CENTER
Warranty: 2 years
Size/Weight: 189 x 302 x 172 mm / 5,100g ± 5%
Price: about 398,800 yen

Currently, the domestic price of a GeForce RTX 4090 is around 250,000 yen, and water-cooled models jump to 300,000 to 350,000 yen. In other words, if you consider that the 850W power supply, the case, the docking-station functions (Thunderbolt 3, Gigabit Ethernet, USB 3.0 Type-C, USB 3.0 x 2), and the 2-year manufacturer’s warranty add 50,000 to 100,000 yen, it is not expensive (though of course it is expensive for a PC peripheral).

The first thing that surprised me when I saw it was how compact it was. The GeForce RTX 4090 alone takes 3 slots and requires a large-capacity power supply, so my impression was that it was anything but compact. However, at around 5.1kg it is quite heavy. If possible, I would have liked something like a handle on top for carrying it.

Filters are located on both sides and are removable. This is a good point; dust buildup was something I was worried about, as the AKiTiO “Node Titan” collects dust quickly after only a little use. The logo on the front lights up in 7 colors when connected (patterns can be set using the GIGABYTE CONTROL CENTER, described later).

When I checked the connection with the three models above, all were fine. Of course, the driver loaded without any problems and the GPU was added. As expected of a finished product! Since this is for AI purposes I won’t use overclocking, but the GIGABYTE CONTROL CENTER also works.


GeForce RTX 3090 vs RTX 4090… “Hmm!?” on the result

Now for the benchmark test. I used the 512 x 768 “Ayaka Kamisato” benchmark, for which I have been collecting data for some time. The results are below; surprisingly, the numbers don’t scale. Over 10 images, it is only 3 seconds faster.

RTX 3060 (12GB): 4.33 it/s / 51.8 seconds (10 images)
Colab Standard (T4): 2.99 it/s / 66 seconds (10 images)
Colab Pro Premium (A100): 6.24 it/s / 32 seconds (10 images)
GeForce RTX 3070 Ti (USB4 connection): 6.32 it/s / 37 seconds (10 images)
GeForce RTX 4070 Ti (USB4 connection): 8.83 it/s / 27 seconds (10 images)
GeForce RTX 3090 (USB4 connection): 8.77 it/s / 29 seconds (10 images)
GeForce RTX 4090 (USB4 connection): 9.74 it/s / 26 seconds (10 images)

Is this a limitation of the Thunderbolt 3 connection, or does generation also use the CPU, so it plateaus with the mobile Ryzen 9? The cause isn’t clear, but according to a benchmark-posting site, with a PCIe x16 connection there is an overwhelming difference: GeForce RTX 3090 at 23.7 seconds versus GeForce RTX 4090 at 13.7 seconds (a ratio of 0.57).
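The ratios quoted throughout this article are simply generation-time quotients; a quick sketch of the arithmetic, using the figures above:

```python
# Ratio arithmetic for the benchmark times quoted above.
# ratio = (RTX 4090 time) / (RTX 3090 time); lower means a bigger 4090 advantage.

def speed_ratio(time_4090: float, time_3090: float) -> float:
    """Return the RTX 4090 / RTX 3090 generation-time ratio, rounded to 2 places."""
    return round(time_4090 / time_3090, 2)

# PCIe x16 figures from the benchmark-posting site: 13.7 s vs 23.7 s.
pcie_ratio = speed_ratio(13.7, 23.7)   # -> 0.58

# Thunderbolt 3 figures from this test: 26 s vs 29 s (10 images).
tb3_ratio = speed_ratio(26, 29)        # -> 0.9

print(pcie_ratio, tb3_ratio)
```

In other words, over PCIe x16 the RTX 4090 takes roughly 0.58x the time of the RTX 3090, while over Thunderbolt 3 the gap narrows to about 0.9x.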

What I noticed when reviewing the prompt was that it was 83 tokens long. Stable Diffusion basically works in units of 75 tokens, after which the influence resets every 75 tokens (words at the beginning carry more weight). Apparently control passes back to the CPU side once at the 76th token, so CPU performance comes into play. As a test, I deleted “black bow”, “cate”, and “genshin” from the end to make it exactly 75 tokens…
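The 75-token behavior can be sketched as follows. This is a simplified illustration of how Stable Diffusion front ends split a long prompt into 75-token chunks; real pipelines use the CLIP tokenizer and add begin/end markers, so the plain list of token placeholders here is an assumption for illustration only:

```python
def chunk_prompt(tokens: list, chunk_size: int = 75) -> list:
    """Split a tokenized prompt into 75-token chunks, as Stable Diffusion
    front ends do. Each chunk is encoded separately, so token 76 starts
    a fresh chunk and emphasis effectively resets at that boundary."""
    return [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]

# An 83-token prompt spills 8 tokens into a second chunk...
prompt_tokens = [f"tok{i}" for i in range(83)]
chunks = chunk_prompt(prompt_tokens)
print(len(chunks), [len(c) for c in chunks])   # 2 chunks: 75 + 8

# ...while trimming it to exactly 75 tokens keeps everything in one chunk.
chunks_75 = chunk_prompt(prompt_tokens[:75])
print(len(chunks_75))                          # 1 chunk
```

Trimming the prompt to 75 tokens avoids the second chunk, which is consistent with the speedup measured below.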

GeForce RTX 3090 (USB4 connection): 10.93 it/s / 23 seconds (10 images)
GeForce RTX 4090 (USB4 connection): 14.47 it/s / 18 seconds (10 images)

With this, the processing times roughly matched the benchmark-posting site. However, despite being faster in absolute time, the GeForce RTX 3090 vs RTX 4090 ratio is 0.78, not as pronounced as the 0.57 seen over PCIe x16.

Double check with SDXL

Since SD 1.5 behaved as above, I also ran a benchmark test with SDXL. The three apps I used were “AUTOMATIC1111,” “StableSwarmUI,” and “Fooocus-MRE.” Note that AUTOMATIC1111 supports SDXL as of v1.6.0 and can generate with both SD and SDXL; the latter two are SDXL-only.


The benchmark test uses the prompt below, and other settings such as sampling method, sampling steps, CFG scale, and checkpoint are matched as closely as possible. The times are taken from what appears on screen, such as the progress bar or the displayed time, so there is some variation between apps. Therefore, this is not a comparison of generation speed between the three applications, but of the same application running on different GPUs.

Test prompt

Professional photo of beautiful Japanese woman like k-pop idol, 20yo, solo, medium breasts, slim, clear eyes, cafe, slightly smile, bokeh,,

Negative prompt:
(worst quality), drawing, 3d, 2d, painting, cartoons, (distorted | distorted | disfigured: 1.2), (hands mutated AND fingers: 1.2), dirty hand,

Resolution: 832×1,216
Sampling method: DPM++ 3M SDE Karras
Sampling steps: 20
CFG scale: 4
Checkpoint: firsttunnerXL

Batch of 10 on the GeForce RTX 4090 box

AUTOMATIC1111 51 seconds (0.66)
StableSwarmUI 63 seconds (0.73)
Fooocus-MRE 30 seconds (3 seconds x 10), generation time shown on the progress bar (0.6)

Batch of 10 on the GeForce RTX 3090 box

AUTOMATIC1111 77 seconds
StableSwarmUI 86 seconds
Fooocus-MRE 50 seconds (5 seconds x 10), generation time shown on the progress bar

Looking at the results, the best ratio is 0.6, versus 0.57 for the PCIe x16 connection mentioned earlier. That remaining 0.03 likely shows up as overhead from the Thunderbolt 3 connection.

As I wrote in a previous article, NVIDIA driver versions after 531.61 support VRAM offloading, so if VRAM capacity is exceeded, main memory is used instead. The switch also happens quite early, so processing that used to fit in VRAM is now moved to main memory… In any case, once main memory is used, things become more than 10 times slower, and it’s unclear what the fast GPU is for (though it’s probably good for LLMs and training). There is no way to turn this off, so I highly recommend driver 531.61 for generating images.

As mentioned above, even with a Thunderbolt 3 connection, there was a difference of up to a 0.6 ratio between the GeForce RTX 3090 and GeForce RTX 4090. I’ve been playing in this environment for a few days, mostly with SDXL recently, and it feels almost twice as fast. In fact, for the SDXL images I usually make:

[00:08<00:00, 2.97it/s] (RTX 4070 Ti + NUC10i5FNH, for reference)
[00:07<00:00, 3.20it/s] (RTX 3090 + Ryzen 9)
[00:04<00:00, 5.95it/s] (RTX 4090 + Ryzen 9)

There is a difference like this. The GeForce RTX 4070 Ti and GeForce RTX 3090 are almost the same, with the former slower because the CPU holds it back. In any case, being nearly twice as fast on the GeForce RTX 4090 completely changes the feel of operation: at more than 5 seconds I feel like I’m waiting, below that I don’t. On top of that, it is water-cooled (though there are 2 fans), so it is quiet; there is no loud noise during continuous operation.
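For reference, the it/s readouts above can be converted into rough per-image times like this (a quick sketch, assuming 20 sampling steps per image as configured earlier and one iteration per step; real wall-clock times also include VAE decoding and other overhead):

```python
def seconds_per_image(its: float, steps: int = 20) -> float:
    """Convert a tqdm-style it/s readout into approximate seconds per image,
    assuming one iteration per sampling step (overhead not included)."""
    return round(steps / its, 1)

print(seconds_per_image(2.97))  # RTX 4070 Ti + NUC10i5FNH -> 6.7
print(seconds_per_image(5.95))  # RTX 4090 + Ryzen 9       -> 3.4
```

This matches the feel described above: the RTX 4090 lands well under the 5-second threshold where generation stops feeling like waiting.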

I would love to have one, but the problem for me is the price. The package isn’t overpriced for what’s in it, but paying nearly 400,000 yen simply takes resolve. Right now I feel like the AKiTiO “Node Titan” + RTX 4070 Ti has finally paid for itself through article work, and this is twice the price. For work, time efficiency is the key, so it would be nice if there were a job to justify it…

By the way, even if you build a PC around a regular GeForce RTX 4090, the performance is about the same. However, large cases are a hassle, so I’d like to avoid them if possible. Thinking of it that way, it’s like buying a decent camera and lens (lol). Shall I take the plunge?
