Is the PNY NVIDIA Quadro RTX 4000 a good GPU for Machine Learning on Linux?

As a web developer, I am growing increasingly interested in data science and machine learning, enough that I have decided to build a lab at home.

I have discovered the Quadro RTX 4000 and am wondering how well it would run ML frameworks on Ubuntu Linux. Are the correct drivers available on Linux so that this card can take advantage of ML frameworks?

    LINUX X64 (AMD64/EM64T) DISPLAY DRIVER

This is the only driver that I could find, but it is a "Display Driver", so I am not sure whether it enables ML frameworks to use this GPU for acceleration. Will it work with Intel-based processors?

Any guidance would be greatly appreciated.










Tags: gpu linux

– crayden, asked 7 hours ago (edited 6 hours ago)
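For reference, a minimal sketch (assuming TensorFlow is installed in a Python environment, and that the NVIDIA driver plus CUDA/cuDNN libraries are already set up) for checking whether the framework can actually see the GPU:

    # Minimal sketch: check whether TensorFlow can see the NVIDIA GPU.
    # Assumes TensorFlow is installed and the NVIDIA driver plus CUDA/cuDNN
    # libraries are present on the system.
    import tensorflow as tf

    print("Built with CUDA support:", tf.test.is_built_with_cuda())
    print("GPU available:", tf.test.is_gpu_available())

If the second line prints False, the framework cannot use the card for acceleration, regardless of which driver package was installed.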











  • I've used the Quadro 2000, and the main issue is that it does not have much memory. 5 GB is not enough for DL tasks. If you can afford it, buy a 2080, which is excellent. I don't know the memory of the 4000, but the 2000's memory is very limiting and you cannot train big models on it. The GPU itself is reasonably powerful, though.
    – Media, 6 hours ago

  • I can also mention that PNY cards do not have a good cooling system. You have to keep that in mind.
    – Media, 6 hours ago

  • Thanks for your feedback @Media. Would you be able to recommend a card that would work well for getting up and running with ML/deep learning?
    – crayden, 6 hours ago

  • I guess the 2080 Ti is the best at the moment due to its power and the new tensor cores that have been introduced for DL/ML tasks. It is also far cheaper than the Titan.
    – Media, 6 hours ago

  • I noticed you are referring to the older 5 GB Quadro 2000. The new Quadro RTX line is based on the Turing architecture and includes dedicated tensor cores for acceleration. Shouldn't this make a huge difference between the 2000 you have used and the new RTX/Turing-based cards?
    – crayden, 6 hours ago















1 Answer



















You seem to be looking at the latest Quadro RTX 4000, which has the following compute capability rating:

[Table from NVIDIA's list of CUDA-enabled GPUs, showing the Quadro RTX 4000 with compute capability 7.5]

You can find the complete list for all NVIDIA GPUs on NVIDIA's CUDA GPUs page.

While it has an impressive compute capability of 7.5 (the same as the RTX 2080 Ti), the main drawback is its 8 GB of memory. This is definitely enough to get started with ML/DL and will allow you to do many things. However, memory is often the thing that will slow you down and limit your models.
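For reference, a minimal sketch (assuming PyTorch with CUDA support is installed) for reading the compute capability, memory, and multiprocessor count of whatever card is installed:

    # Minimal sketch: report basic properties of the first visible GPU.
    # Assumes PyTorch with CUDA support is installed.
    import torch

    if torch.cuda.is_available():
        props = torch.cuda.get_device_properties(0)
        print(f"Device: {props.name}")
        print(f"Compute capability: {props.major}.{props.minor}")
        print(f"Total memory: {props.total_memory / 1024**3:.1f} GiB")
        print(f"Multiprocessors: {props.multi_processor_count}")
    else:
        print("No CUDA-capable GPU visible to PyTorch")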



The reason is that a large model will require a large number of parameters. Take a look at the following table (models included in Keras), where you can see the number of parameters each model requires:

[Table of the pre-trained models bundled with Keras (keras.applications) and their parameter counts]

The issue is that the more parameters you have, the more memory you need, and so the smaller the batch size you are able to use during training. There are many arguments for larger vs. smaller batch sizes, but having less memory will force you to stick to smaller batch sizes when using large models.
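A minimal sketch (assuming TensorFlow/Keras is installed) that reproduces a comparison like the table above for a few of the models bundled with keras.applications:

    # Minimal sketch: print parameter counts for a few Keras-bundled models.
    # Assumes TensorFlow/Keras is installed; weights=None avoids downloading
    # the pre-trained weights, since only the architecture is needed here.
    from tensorflow.keras import applications

    for name, build in [("VGG16", applications.VGG16),
                        ("ResNet50", applications.ResNet50),
                        ("InceptionV3", applications.InceptionV3)]:
        model = build(weights=None)
        print(f"{name}: {model.count_params():,} parameters")

Roughly, every parameter stored as a 32-bit float costs 4 bytes, and training also needs memory for gradients, optimiser state, and activations, which is why the parameter count translates directly into pressure on the card's 8 GB.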



It seems from NVIDIA's marketing that the Quadro product line is aimed more at creative professionals (film/image editing, etc.), whereas the GeForce line is marketed for gaming and AI. This suggests that the Quadro is not necessarily optimised for fast computation.






– n1k31t4, answered 5 hours ago












  • NVIDIA's marketing makes me believe this card is better than GeForce for AI? From the product page: "Deep learning frameworks such as Caffe2, MXNet, CNTK, TensorFlow, and others deliver dramatically faster training times and higher multi-node training performance. GPU accelerated libraries such as cuDNN, cuBLAS, and TensorRT deliver higher performance for both deep learning inference and High Performance Computing (HPC) applications." I thought GeForce was optimized for gaming, in contrast.
    – crayden, 5 hours ago

  • Perhaps they are starting to push in that direction (they have the same text everywhere), but GeForce has generally been the product line to go for. You care about the number of CUDA cores, the amount of memory, and the transfer rate of that memory. Then just find the best combination of those factors that your budget allows.
    – n1k31t4, 5 hours ago

  • What is an adequate amount of memory and CUDA cores?
    – crayden, 5 hours ago

  • How long is a piece of string? ;) If you want to work with images or videos, the more the better. Working with text can be less memory intensive, and something like stock market data is not memory hungry. If you get the Quadro, an RTX, or a Titan, it is likely that the human will be the slowest link. Just don't train on a CPU and you'll be fine.
    – n1k31t4, 4 hours ago










