Is the PNY NVIDIA Quadro RTX 4000 a good GPU for Machine Learning on Linux?
As a web developer, I am growing increasingly interested in data science/machine learning, enough that I have decided to build a lab at home.
I have discovered the Quadro RTX 4000 and am wondering how well it would run ML frameworks on Ubuntu Linux. Are the correct drivers available on Linux so that this card can take advantage of ML frameworks?
LINUX X64 (AMD64/EM64T) DISPLAY DRIVER
This is the only driver I could find, but it is a "Display Driver", so I am not sure whether it enables ML frameworks to use this GPU for acceleration. Will it work with Intel-based processors?
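I assume that once that driver and a matching CUDA/cuDNN install are in place, something like the following should confirm whether a framework actually sees the card. A minimal sketch, assuming TensorFlow 2.x built with GPU support:

```python
import tensorflow as tf

# If the NVIDIA driver, CUDA and cuDNN are installed correctly, TensorFlow
# should report at least one GPU device here.
gpus = tf.config.list_physical_devices("GPU")
print("GPUs visible to TensorFlow:", gpus)

if gpus:
    # Run a tiny op on the GPU to confirm the device is usable end to end.
    with tf.device("/GPU:0"):
        x = tf.random.normal((1024, 1024))
        y = tf.matmul(x, x)
    print("Matmul ran on:", y.device)
else:
    print("No GPU visible; check `nvidia-smi` and the CUDA/cuDNN versions.")
```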
Any guidance would be greatly appreciated.
gpu linux
asked 7 hours ago by crayden, a new contributor
I've used the Quadro 2000, and the major issue is that it does not have much memory. 5 GB is not adequate for DL tasks. If you can afford it, buy a 2080, which is perfect. I don't know the memory of the 4000, but the 2000's memory is very limiting and you cannot train big models on it. The GPU itself is reasonably powerful, though.
– Media
6 hours ago
I should also mention that PNY cards do not have a good cooling system. You have to keep that in mind.
– Media
6 hours ago
Thanks for your feedback @Media. Would you be able to recommend a card that would work well for getting up and running with ML/Deep learning?
– crayden
6 hours ago
I think the 2080 Ti is the best at the moment, due to its power and the new Tensor Cores that have been introduced for DL/ML tasks. It is also far cheaper than the Titan.
– Media
6 hours ago
I noticed you are referring to the older 2000/5GB version of the Quadro. The new Quadro RTX line is based on the Turing architecture and includes dedicated Tensor Cores for acceleration. Shouldn't this make a huge difference between the 2000 you have used and the new RTX/Turing-based cards?
– crayden
6 hours ago
1 Answer
You seem to be looking at the latest Quadro RTX 4000, which has a compute capability of 7.5. You can find the complete list for all Nvidia GPUs here.
While that is an impressive score (the same as the RTX 2080 Ti), the main drawback is the memory: 8 GB. This is definitely enough to get started with ML/DL and will allow you to do many things. However, memory is often the thing that will slow you down and limit your models.
The reason is that a large model requires a large number of parameters. Take a look at the table of models included in Keras, where you can see the number of parameters each model requires.
The issue is that the more parameters you have, the more memory you need, and so the smaller the batch size you can use during training. There are many arguments for larger vs. smaller batch sizes, but having less memory will force you to stick to smaller batch sizes when using large models.
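To make the parameter point concrete, here is a short sketch (assuming `tf.keras` is available; exact counts vary slightly by library version) that prints the parameter count of a few stock Keras models and the memory their float32 weights alone would occupy. During training, activations, gradients and optimizer state typically take several times more memory than the weights themselves.

```python
import tensorflow as tf

# Parameter counts for a few stock Keras models, plus the memory their
# float32 weights alone would occupy (4 bytes per parameter).
for build in (tf.keras.applications.MobileNet,
              tf.keras.applications.ResNet50,
              tf.keras.applications.VGG16):
    model = build(weights=None)  # weights=None skips downloading pretrained files
    params = model.count_params()
    print(f"{model.name:>20}: {params:>12,d} params, "
          f"~{params * 4 / 1024**2:.0f} MB of float32 weights")
```

A VGG16-sized model leaves noticeably less of an 8 GB card free for activations per batch than a small network like MobileNet, which is exactly where the batch-size limit shows up.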
It seems from Nvidia's marketing that the Quadro product line is aimed more at creative professionals (film/image editing etc.), whereas the GeForce line covers gaming and AI. This suggests that the Quadro is not necessarily optimised for fast computation.
answered 5 hours ago by n1k31t4
Nvidia's marketing makes me believe this card is better than GeForce for AI? From the product page: "Deep learning frameworks such as Caffe2, MXNet, CNTK, TensorFlow, and others deliver dramatically faster training times and higher multi-node training performance. GPU accelerated libraries such as cuDNN, cuBLAS, and TensorRT deliver higher performance for both deep learning inference and High Performance Computing (HPC) applications." I thought GeForce was optimized for gaming, in contrast.
– crayden
5 hours ago
Perhaps they are starting to push in that direction - they have the same text everywhere, but GeForce has generally been the product line to go for. What you care about is the number of CUDA cores, the amount of memory, and the transfer rate of that memory; a quick way to check what your installed card reports is sketched after these comments. Then just find the best combination of those factors that your budget allows.
– n1k31t4
5 hours ago
What is an adequate amount of memory and CUDA cores?
– crayden
5 hours ago
How long is a piece of string? ;) If you want to work with images/videos, the more the better. Working with text can be less memory intensive, and something like stock market data is not memory hungry at all. Whether you get the Quadro, an RTX or a Titan, it is likely that the human will be the slowest link. Just don't train on the CPU and you'll be fine.
– n1k31t4
4 hours ago
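For reference, here is a minimal sketch of reading what the installed card reports (name, total memory, driver version) by querying `nvidia-smi`, which ships with the Linux display driver. It assumes the driver is already installed:

```python
import subprocess

# Ask the driver what it sees; nvidia-smi is installed alongside the
# NVIDIA display driver on Linux.
result = subprocess.run(
    ["nvidia-smi",
     "--query-gpu=name,memory.total,driver_version",
     "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in result.stdout.strip().splitlines():
    print(line)  # e.g. "Quadro RTX 4000, 8192 MiB, 418.xx"
```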