As Wendell said, software and specific design are very important factors. The crucial question is always: what is it for, and with what software and applications will it be used? nVidia and AMD cards are not designed the same way, and bus width doesn't mean the same thing on AMD/Intel as it does on nVidia.

Those using linux have discovered that AMD/Intel hardware doesn't unlock the entire GPU memory for buffering or pipelining OpenCL calls when the application making the calls is programmed towards nVidia cards (and thus optimized for CUDA logic instead of OpenCL). That has to do with the way AMD/Intel cards use their memory, which is basically the way a CPU uses memory, whereas nVidia cards don't follow that conventional logic but use their own system, with less support for Khronos spec API calls. Because the memory bandwidth of AMD cards is bigger, AMD cards will often have less memory available for pipelining workloads to the stream processors in compute applications that aren't open source compliant, but they will deliver more raw graphics power. In applications that follow the industry-standard Khronos spec APIs, AMD cards perform much better than nVidia cards (figures of 5 to 20 times get thrown around, depending on the workload), because they can fully utilize the larger bus width and the conventional instruction logic. nVidia uses faster memory than AMD, but the data is squeezed through a narrower bus, so the effective throughput ends up about the same as or lower than on AMD cards.

nVidia cards use their own proprietary machine language, called PTX, which stacks up instructions in a proprietary format and processes them in parallel. That leads to very good benchmarks, but PTX is not compatible with any open format, so the closest nVidia can come to working with open standards like OpenCL and OpenGL is to incorporate a "trojan horse" into an open source compiler: closed source binaries built into the stack that translate the Khronos spec API calls into PTX in real time and simply ignore the unsupported calls. The host for that is LLVM/Clang, the permissively licensed compiler that Apple backs, because the GNU compiler still doesn't accept the nVidia trojan horses, and hopefully never will, because it's a very bad software practice.

nVidia cards are basically typical Windows/DirectX cards. Intel and AMD graphics cards perform better than nVidia graphics cards in applications optimized for the industry-standard open Khronos spec APIs; nVidia benchmarks faster in Windows. In linux, 4k and higher resolutions are pretty common, and the only thing missing right now is OpenGL 3.3 support in the open source drivers for AMD and Intel. That is still a work in progress, and a final version is expected for the mesa 10 release, which is a couple of months away. nVidia has clearly demonstrated that they will not support linux and open source unless linux and open source allow nVidia's malware binaries into the open source code, and that they will continue to rely on DirectX and CUDA, which lets nVidia produce cheaper cards and sell them at a higher price because they benchmark higher in Windows. AMD cards are much more expensive to make: a wider bus means more memory modules, which are expensive, and it also calls for a beefier power supply, because fast VRAM uses a lot of power.
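To make the memory point concrete: how much memory a runtime actually exposes for compute buffers is something you can query directly, and it's often less than the VRAM printed on the box. A minimal sketch in C against the standard OpenCL 1.x API (assumes the OpenCL headers and a runtime are installed; build with something like gcc query.c -lOpenCL; error handling omitted):

```c
/* Query how much global memory the OpenCL runtime exposes for
 * compute buffers, versus the headline VRAM on the card. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    cl_ulong global_mem, max_alloc;
    cl_uint compute_units;

    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    /* total global memory the runtime is willing to hand out */
    clGetDeviceInfo(device, CL_DEVICE_GLOBAL_MEM_SIZE,
                    sizeof(global_mem), &global_mem, NULL);
    /* largest single buffer you can allocate for pipelining work */
    clGetDeviceInfo(device, CL_DEVICE_MAX_MEM_ALLOC_SIZE,
                    sizeof(max_alloc), &max_alloc, NULL);
    clGetDeviceInfo(device, CL_DEVICE_MAX_COMPUTE_UNITS,
                    sizeof(compute_units), &compute_units, NULL);

    printf("global memory exposed: %llu MiB\n",
           (unsigned long long)(global_mem >> 20));
    printf("largest single buffer: %llu MiB\n",
           (unsigned long long)(max_alloc >> 20));
    printf("compute units:         %u\n", compute_units);
    return 0;
}
```

The OpenCL spec only requires CL_DEVICE_MAX_MEM_ALLOC_SIZE to be a quarter of the global memory, so it's normal to see a driver expose far less than the full VRAM for a single compute buffer, which is exactly the kind of behaviour described above.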
Add to that the larger lithography of AMD chips and the higher transistor counts for the sales price, which makes for a larger piece of silicon, which means fewer units per silicon wafer, which means more expensive production costs... AMD has a very small profit margin, nVidia has a very large one, but the customer gets more for his money with AMD hardware, and in linux, and with HSA coming in a few months, that's a big thing right now. According to NPD, 21% of all laptops sold in the US in 2013 were Chromebooks: linux-preinstalled machines that have no Windows on them, have no NSA- or Microsoft-infected BIOS (they all use open source coreboot BIOS images), and contain no binary drivers or binary kernel modules (aka proprietary drivers). Now you know why Microsoft and nVidia mounted those huge slurring and sabotage campaigns against Google, linux and AMD in 2013... do the math: the US is the market with the least linux acceptance in the world, and in 2013 one in five new laptops was running not Windows but linux as its PC operating system, and SteamOS wasn't even out yet...
So yeah, bus width doesn't make much difference in closed source sabotageware, but in open source software it suddenly makes a lot of difference, and with HSA the difference will become even bigger, as GPU cores will have to access the system memory directly and use the GPU memory to pipeline instructions to boost system performance, since VRAM is much faster than system RAM. At that point, GPUs will need even more VRAM, or they will lose performance because they have to fall back on the slower system RAM more often. With HSA systems, for applications like Lightworks or Maya or 3D game engines with a lot of next-gen functionality, the typical performance system two years from now will probably have 8 GB of system RAM and 8 GB of VRAM, to get the most out of HSA acceleration and still offer fast 4k graphics. A lot of systems won't have dedicated VRAM anymore, though (Intel and AMD are both evolving towards APUs), and Intel and AMD will probably offer 512-bit bus width GP-GPUs or co-CPUs at a lower price, so that people can put multiple ones in a system to scale the performance; each card will have less VRAM (which also keeps the price low), and those HSA systems will mainly rely on 16 GB or more of DDR4 system RAM instead.
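For a rough idea of what "GPU cores accessing the system memory directly" looks like from the programmer's side, here is a sketch using plain OpenCL as an approximation. Real HSA has its own runtime and a fully unified address space; CL_MEM_USE_HOST_PTR is just the closest portable analogue, and error handling is omitted:

```c
/* Ask the GPU to work on a buffer that lives in ordinary system RAM,
 * instead of copying it into dedicated VRAM first. */
#include <stdlib.h>
#include <CL/cl.h>

#define N (1024 * 1024)

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* the data lives in plain system RAM... */
    float *host_data = malloc(N * sizeof(float));

    /* ...and CL_MEM_USE_HOST_PTR asks the runtime to use that
     * allocation in place rather than making a private VRAM copy */
    cl_mem buf = clCreateBuffer(ctx,
                                CL_MEM_READ_WRITE | CL_MEM_USE_HOST_PTR,
                                N * sizeof(float), host_data, NULL);

    /* kernels enqueued on this queue would now operate on the
     * same pages the CPU sees */

    clReleaseMemObject(buf);
    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    free(host_data);
    return 0;
}
```

On an APU the runtime can honour that flag with true zero-copy; on a discrete card it usually still shadows the pages across the PCIe bus, which is exactly why HSA systems lean on a big shared DDR4 pool instead of dedicated VRAM.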