Deep learning (MLP) on multiclass classification: model learns only one class

I am new to deep learning and have imbalanced class data. I used one-hot encoding and scaling to preprocess my data, the Adam optimizer as the optimizer, and sparse categorical crossentropy as my loss function. The model always gives high accuracy on one class and very low accuracy on the other classes. Here is my code:



```python



#separating test data according to classes (2018 flights are the held-out test set)
data_test = data_final[data_final.YEAR.isin([2018])]  # isin expects a list-like, not a scalar
data_test_0 = data_test[data_test['DELAY_CLASS']==0].copy()  # .copy() avoids SettingWithCopyWarning on pop
test_labels_0 = data_test_0.pop('DELAY_CLASS')
data_test_1 = data_test[data_test['DELAY_CLASS']==1].copy()
test_labels_1 = data_test_1.pop('DELAY_CLASS')
data_test_2 = data_test[data_test['DELAY_CLASS']==2].copy()
test_labels_2 = data_test_2.pop('DELAY_CLASS')
data_test_3 = data_test[data_test['DELAY_CLASS']==3].copy()
test_labels_3 = data_test_3.pop('DELAY_CLASS')


#Extracting continuous columns from training data
data_train = data_train[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

#Extracting continuous columns from testing data
data_test = data_test[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]



print("reached here")

#SMOTE (note: fit_sample was renamed fit_resample in newer imbalanced-learn versions)
sm = SMOTE(random_state=2)
ad = ADASYN(random_state=2)
data_train, train_labels = sm.fit_sample(data_train, train_labels)

data_train = pd.DataFrame(data_train)
data_train = data_train.rename(columns={0:'MONTH', 1:'DAY_OF_MONTH', 2:'DAY_OF_WEEK', 3:'Dep_Hour',
4:'Arr_Hour', 5:'CRS_ELAPSED_TIME', 6:'DISTANCE',
7:'traffic', 8:'O_SurfaceTemperatureFahrenheit', 9:'O_CloudCoveragePercent',
10:'O_WindSpeedMph', 11:'O_PrecipitationPreviousHourInches', 12:'O_SnowfallInches',
13:'D_SurfaceTemperatureFahrenheit', 14:'D_CloudCoveragePercent', 15:'D_WindSpeedMph',
16:'D_PrecipitationPreviousHourInches', 17:'D_SnowfallInches', 18:'Bird_Strike'})

#taking only continuous columns
cols = ['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit','D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']

#scaling
train_mean = data_train[cols].mean(axis=0)
train_std = data_train[cols].std(axis=0)
data_train[cols] = (data_train[cols] - train_mean) / train_std
data_test[cols] = (data_test[cols] - train_mean) / train_std
train_labels = pd.Series(train_labels)

#taking continuous columns from test separated data
data_test_0 = data_test_0[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit','D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

data_test_1 = data_test_1[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK','Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit','D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

data_test_2 = data_test_2[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

data_test_3 = data_test_3[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

#my model
def build_model():
    model = keras.Sequential([
        layers.Dense(100, activation='sigmoid', input_shape=[len(data_train.keys())]),
        #layers.Dropout(0.5),
        layers.Dense(50, activation='softplus'),
        #layers.Dropout(0.3),
        layers.Dense(25, activation='sigmoid'),
        #layers.Dropout(0.2),
        layers.Dense(4, activation='softmax')
    ])

    model.compile(loss='sparse_categorical_crossentropy',  # with binary crossentropy use sigmoid and 1 output neuron
                  optimizer=tf.train.AdamOptimizer(0.001),
                  metrics=['accuracy'])
    return model

model = build_model()
model.fit(data_train, train_labels, epochs=5, batch_size=128)



print("Class 0:")
test_loss, test_acc = model.evaluate(data_test_0, test_labels_0)
print(test_acc)
print("Class 1:")
test_loss, test_acc = model.evaluate(data_test_1, test_labels_1)
print(test_acc)
print("Class 2:")
test_loss, test_acc = model.evaluate(data_test_2, test_labels_2)
print(test_acc)
print("Class 3:")
test_loss, test_acc = model.evaluate(data_test_3, test_labels_3)
print(test_acc)


```



The training data is flight data from 2016 and 2017, and the testing data is from 2018. I separated the test data by class so I could see the class-wise accuracy on the test set.
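As an aside on the evaluation approach above: rather than calling `model.evaluate` once per class subset, the per-class accuracies (i.e. per-class recall) can be read off a confusion matrix built from predictions on the full test set. A minimal sketch with synthetic labels (the label arrays here are illustrative, not the flight data):

```python
import numpy as np

# Synthetic true labels and predictions for a 4-class problem (illustrative).
y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([0, 2, 1, 2, 2, 2, 2, 3])
n_classes = 4

# Build the confusion matrix by counting (true, predicted) pairs.
cm = np.zeros((n_classes, n_classes), dtype=int)
for t, p in zip(y_true, y_pred):
    cm[t, p] += 1

# Per-class accuracy (recall): correct predictions / true count, row by row.
per_class_acc = cm.diagonal() / cm.sum(axis=1)
print(per_class_acc)  # one accuracy value per class
```

This also makes it easy to see *which* class the model collapses onto: with a single dominant predicted class, one column of the confusion matrix absorbs most of the counts.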



The output is:



```
Epoch 1/5
1990363/1990363 [==============================] - 17s 8us/step - loss: 1.3231 - acc: 0.3466
Epoch 2/5
1990363/1990363 [==============================] - 17s 8us/step - loss: 1.2799 - acc: 0.3821
Epoch 3/5
1990363/1990363 [==============================] - 17s 8us/step - loss: 1.2634 - acc: 0.3939
Epoch 4/5
1990363/1990363 [==============================] - 17s 8us/step - loss: 1.2519 - acc: 0.4013
Epoch 5/5
1990363/1990363 [==============================] - 16s 8us/step - loss: 1.2445 - acc: 0.4068

Class 0:
44929/44929 [==============================] - 1s 12us/step
0.027710387500278218
Class 1:
10668/10668 [==============================] - 0s 11us/step
0.015935508061492312
Class 2:
33204/33204 [==============================] - 0s 9us/step
0.8956149861318866
Class 3:
274983/274983 [==============================] - 2s 9us/step
0.035293090845941046
```


The output stays roughly the same if I use ADASYN instead of SMOTE, or if I change the layers and activation functions. Please help me out.
Thanks in advance.
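One alternative (or complement) to SMOTE/ADASYN that is cheap to try is passing class weights to `model.fit` via its `class_weight` argument, so the loss penalizes mistakes on rare classes more heavily without synthesizing samples. A hedged sketch of computing inverse-frequency weights (the class counts below are made up for illustration, not taken from the flight data):

```python
import numpy as np

# Illustrative imbalanced class counts (hypothetical, not the real dataset).
train_labels = np.array([0] * 500 + [1] * 100 + [2] * 300 + [3] * 2600)

classes, counts = np.unique(train_labels, return_counts=True)
# Inverse-frequency weighting, normalized so the weights average to 1
# when each class contributes its count of samples.
weights = len(train_labels) / (len(classes) * counts)
class_weight = {int(c): float(w) for c, w in zip(classes, weights)}
print(class_weight)

# Then, in Keras, the dict is passed straight to fit:
# model.fit(data_train, train_labels, epochs=5, batch_size=128,
#           class_weight=class_weight)
```

This way the test set stays untouched and the resampling step can be dropped entirely while experimenting, which helps isolate whether the collapse onto one class comes from the imbalance itself or from the model/feature setup.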










share|improve this question









New contributor




Bhupesh_decoder is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
Check out our Code of Conduct.







$endgroup$
















    0












    $begingroup$


    I am new to deep learning. I have imbalanced class data. I used one hot encoding and scaling to preprocess my data. I have used adamoptimizer as optimizer function and sparse categorical crossentropy as my lass function. The model always gives high accuracy on one class with very low accuracy on other classes. Here is my code:



    `



    #separating test data according to classes
    data_test = data_final[data_final.YEAR.isin(2018)]
    data_test_0 = data_test[data_test['DELAY_CLASS']==0]
    test_labels_0 = data_test_0.pop('DELAY_CLASS')
    data_test_1 = data_test[data_test['DELAY_CLASS']==1]
    test_labels_1 = data_test_1.pop('DELAY_CLASS')
    data_test_2 = data_test[data_test['DELAY_CLASS']==2]
    test_labels_2 = data_test_2.pop('DELAY_CLASS')
    data_test_3 = data_test[data_test['DELAY_CLASS']==3]
    test_labels_3 = data_test_3.pop('DELAY_CLASS')


    #Extracting continuous columns from training data
    data_train = data_train[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
    'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
    'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

    #Extracting continuous columns from testing data
    data_test = data_test[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
    'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
    'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]



    print("reached here")

    #SMOTE
    sm = SMOTE(random_state=2)
    ad = ADASYN(random_state=2)
    data_train, train_labels = sm.fit_sample(data_train, train_labels)

    data_train = pd.DataFrame(data_train)
    data_train = data_train.rename(columns = 0:'MONTH',1:'DAY_OF_MONTH',2:'DAY_OF_WEEK',3:'Dep_Hour',
    4:'Arr_Hour', 5:'CRS_ELAPSED_TIME', 6:'DISTANCE',
    7:'traffic',8:'O_SurfaceTemperatureFahrenheit',9:'O_CloudCoveragePercent',
    10:'O_WindSpeedMph',11:'O_PrecipitationPreviousHourInches',12:'O_SnowfallInches',
    13:'D_SurfaceTemperatureFahrenheit',14:'D_CloudCoveragePercent',15:'D_WindSpeedMph',
    16:'D_PrecipitationPreviousHourInches',17:'D_SnowfallInches',18:'Bird_Strike')

    #taking only continuous columns
    cols = ['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
    'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit','D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']

    #scaling
    train_mean = data_train[cols].mean(axis=0)
    train_std = data_train[cols].std(axis=0)
    data_train[cols] = (data_train[cols] - train_mean) / train_std
    data_test[cols] = (data_test[cols] - train_mean) / train_std
    rain_labels = pd.Series(train_labels)

    #taking continuous columns from test separated data
    data_test_0 = data_test_0[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
    'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit','D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

    data_test_1 = data_test_1[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK','Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit','D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

    data_test_2 = data_test_2[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
    'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
    'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

    data_test_3 = data_test_3[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
    'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
    'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

    #my model
    def build_model():
    model = keras.Sequential([
    layers.Dense(100, activation = 'sigmoid', input_shape=[len(data_train.keys())]),
    #layers.Dropout(0.5),
    layers.Dense(50, activation = 'softplus'),
    #layers.Dropout(0.3),
    layers.Dense(25, activation = 'sigmoid'),
    #layers.Dropout(0.2),
    layers.Dense(4, activation = 'softmax')
    ])

    model.compile(loss='sparse_categorical_crossentropy',#with binary crossentropy use sigmoid and 1 output neuron
    optimizer= tf.train.AdamOptimizer(0.001),
    metrics=['accuracy'])
    return model

    model = build_model()
    model.fit(data_train, train_labels, epochs=5, batch_size=128)



    test_loss, test_acc = model.evaluate(data_test_0, test_labels_0)
    print(test_acc)
    test_loss, test_acc = model.evaluate(data_test_1, test_labels_1)
    print(test_acc)
    test_loss, test_acc = model.evaluate(data_test_2, test_labels_2)
    print(test_acc)
    test_loss, test_acc = model.evaluate(data_test_3, test_labels_3)
    print(test_acc)


    `



    The training data is flights data of 2016 and 2017 and testing data is of 2018. I have separated classes from testing data to see the class wise accuracy of testing data.



    The output is:



    Epoch 1/5
    1990363/1990363 [==============================] - 17s 8us/step - loss: 1.3231 - acc: 0.3466
    Epoch 2/5
    1990363/1990363 [==============================] - 17s 8us/step - loss: 1.2799 - acc: 0.3821
    Epoch 3/5
    1990363/1990363 [==============================] - 17s 8us/step - loss: 1.2634 - acc: 0.3939
    Epoch 4/5
    1990363/1990363 [==============================] - 17s 8us/step - loss: 1.2519 - acc: 0.4013
    Epoch 5/5
    1990363/1990363 [==============================] - 16s 8us/step - loss: 1.2445 - acc: 0.4068

    Class 0:
    44929/44929 [==============================] - 1s 12us/step
    0.027710387500278218
    Class 1:
    10668/10668 [==============================] - 0s 11us/step
    0.015935508061492312
    Class 2:
    33204/33204 [==============================] - 0s 9us/step
    0.8956149861318866
    Class 3:
    274983/274983 [==============================] - 2s 9us/step
    0.035293090845941046


    The output remains somewhat same if I use adasyn instead of SMOTE or change layers and activation functions. Please help me out.
    Thanks in advance.










    share|improve this question









    New contributor




    Bhupesh_decoder is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
    Check out our Code of Conduct.







    $endgroup$














      0












      0








      0





      $begingroup$


      I am new to deep learning. I have imbalanced class data. I used one hot encoding and scaling to preprocess my data. I have used adamoptimizer as optimizer function and sparse categorical crossentropy as my lass function. The model always gives high accuracy on one class with very low accuracy on other classes. Here is my code:



      `



      #separating test data according to classes
      data_test = data_final[data_final.YEAR.isin(2018)]
      data_test_0 = data_test[data_test['DELAY_CLASS']==0]
      test_labels_0 = data_test_0.pop('DELAY_CLASS')
      data_test_1 = data_test[data_test['DELAY_CLASS']==1]
      test_labels_1 = data_test_1.pop('DELAY_CLASS')
      data_test_2 = data_test[data_test['DELAY_CLASS']==2]
      test_labels_2 = data_test_2.pop('DELAY_CLASS')
      data_test_3 = data_test[data_test['DELAY_CLASS']==3]
      test_labels_3 = data_test_3.pop('DELAY_CLASS')


      #Extracting continuous columns from training data
      data_train = data_train[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
      'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
      'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

      #Extracting continuous columns from testing data
      data_test = data_test[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
      'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
      'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]



      print("reached here")

      #SMOTE
      sm = SMOTE(random_state=2)
      ad = ADASYN(random_state=2)
      data_train, train_labels = sm.fit_sample(data_train, train_labels)

      data_train = pd.DataFrame(data_train)
      data_train = data_train.rename(columns = 0:'MONTH',1:'DAY_OF_MONTH',2:'DAY_OF_WEEK',3:'Dep_Hour',
      4:'Arr_Hour', 5:'CRS_ELAPSED_TIME', 6:'DISTANCE',
      7:'traffic',8:'O_SurfaceTemperatureFahrenheit',9:'O_CloudCoveragePercent',
      10:'O_WindSpeedMph',11:'O_PrecipitationPreviousHourInches',12:'O_SnowfallInches',
      13:'D_SurfaceTemperatureFahrenheit',14:'D_CloudCoveragePercent',15:'D_WindSpeedMph',
      16:'D_PrecipitationPreviousHourInches',17:'D_SnowfallInches',18:'Bird_Strike')

      #taking only continuous columns
      cols = ['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
      'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit','D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']

      #scaling
      train_mean = data_train[cols].mean(axis=0)
      train_std = data_train[cols].std(axis=0)
      data_train[cols] = (data_train[cols] - train_mean) / train_std
      data_test[cols] = (data_test[cols] - train_mean) / train_std
      rain_labels = pd.Series(train_labels)

      #taking continuous columns from test separated data
      data_test_0 = data_test_0[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
      'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit','D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

      data_test_1 = data_test_1[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK','Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit','D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

      data_test_2 = data_test_2[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
      'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
      'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

      data_test_3 = data_test_3[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
      'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
      'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

      #my model
      def build_model():
      model = keras.Sequential([
      layers.Dense(100, activation = 'sigmoid', input_shape=[len(data_train.keys())]),
      #layers.Dropout(0.5),
      layers.Dense(50, activation = 'softplus'),
      #layers.Dropout(0.3),
      layers.Dense(25, activation = 'sigmoid'),
      #layers.Dropout(0.2),
      layers.Dense(4, activation = 'softmax')
      ])

      model.compile(loss='sparse_categorical_crossentropy',#with binary crossentropy use sigmoid and 1 output neuron
      optimizer= tf.train.AdamOptimizer(0.001),
      metrics=['accuracy'])
      return model

      model = build_model()
      model.fit(data_train, train_labels, epochs=5, batch_size=128)



      test_loss, test_acc = model.evaluate(data_test_0, test_labels_0)
      print(test_acc)
      test_loss, test_acc = model.evaluate(data_test_1, test_labels_1)
      print(test_acc)
      test_loss, test_acc = model.evaluate(data_test_2, test_labels_2)
      print(test_acc)
      test_loss, test_acc = model.evaluate(data_test_3, test_labels_3)
      print(test_acc)


      `



      The training data is flights data of 2016 and 2017 and testing data is of 2018. I have separated classes from testing data to see the class wise accuracy of testing data.



      The output is:



      Epoch 1/5
      1990363/1990363 [==============================] - 17s 8us/step - loss: 1.3231 - acc: 0.3466
      Epoch 2/5
      1990363/1990363 [==============================] - 17s 8us/step - loss: 1.2799 - acc: 0.3821
      Epoch 3/5
      1990363/1990363 [==============================] - 17s 8us/step - loss: 1.2634 - acc: 0.3939
      Epoch 4/5
      1990363/1990363 [==============================] - 17s 8us/step - loss: 1.2519 - acc: 0.4013
      Epoch 5/5
      1990363/1990363 [==============================] - 16s 8us/step - loss: 1.2445 - acc: 0.4068

      Class 0:
      44929/44929 [==============================] - 1s 12us/step
      0.027710387500278218
      Class 1:
      10668/10668 [==============================] - 0s 11us/step
      0.015935508061492312
      Class 2:
      33204/33204 [==============================] - 0s 9us/step
      0.8956149861318866
      Class 3:
      274983/274983 [==============================] - 2s 9us/step
      0.035293090845941046


      The output remains somewhat same if I use adasyn instead of SMOTE or change layers and activation functions. Please help me out.
      Thanks in advance.










      share|improve this question









      New contributor




      Bhupesh_decoder is a new contributor to this site. Take care in asking for clarification, commenting, and answering.
      Check out our Code of Conduct.







      $endgroup$




      I am new to deep learning. I have imbalanced class data. I used one hot encoding and scaling to preprocess my data. I have used adamoptimizer as optimizer function and sparse categorical crossentropy as my lass function. The model always gives high accuracy on one class with very low accuracy on other classes. Here is my code:



      `



      #separating test data according to classes
      data_test = data_final[data_final.YEAR.isin(2018)]
      data_test_0 = data_test[data_test['DELAY_CLASS']==0]
      test_labels_0 = data_test_0.pop('DELAY_CLASS')
      data_test_1 = data_test[data_test['DELAY_CLASS']==1]
      test_labels_1 = data_test_1.pop('DELAY_CLASS')
      data_test_2 = data_test[data_test['DELAY_CLASS']==2]
      test_labels_2 = data_test_2.pop('DELAY_CLASS')
      data_test_3 = data_test[data_test['DELAY_CLASS']==3]
      test_labels_3 = data_test_3.pop('DELAY_CLASS')


      #Extracting continuous columns from training data
      data_train = data_train[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
      'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
      'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

      #Extracting continuous columns from testing data
      data_test = data_test[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
      'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
      'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]



      print("reached here")

      #SMOTE
      sm = SMOTE(random_state=2)
      ad = ADASYN(random_state=2)
      data_train, train_labels = sm.fit_sample(data_train, train_labels)

      data_train = pd.DataFrame(data_train)
      data_train = data_train.rename(columns = 0:'MONTH',1:'DAY_OF_MONTH',2:'DAY_OF_WEEK',3:'Dep_Hour',
      4:'Arr_Hour', 5:'CRS_ELAPSED_TIME', 6:'DISTANCE',
      7:'traffic',8:'O_SurfaceTemperatureFahrenheit',9:'O_CloudCoveragePercent',
      10:'O_WindSpeedMph',11:'O_PrecipitationPreviousHourInches',12:'O_SnowfallInches',
      13:'D_SurfaceTemperatureFahrenheit',14:'D_CloudCoveragePercent',15:'D_WindSpeedMph',
      16:'D_PrecipitationPreviousHourInches',17:'D_SnowfallInches',18:'Bird_Strike')

      #taking only continuous columns
      cols = ['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
      'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit','D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']

      #scaling
      train_mean = data_train[cols].mean(axis=0)
      train_std = data_train[cols].std(axis=0)
      data_train[cols] = (data_train[cols] - train_mean) / train_std
      data_test[cols] = (data_test[cols] - train_mean) / train_std
      rain_labels = pd.Series(train_labels)

      #taking continuous columns from test separated data
      data_test_0 = data_test_0[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
      'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit','D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

      data_test_1 = data_test_1[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK','Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit','D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

      data_test_2 = data_test_2[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
      'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
      'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

      data_test_3 = data_test_3[['MONTH','DAY_OF_MONTH','DAY_OF_WEEK',
      'Dep_Hour','Arr_Hour','CRS_ELAPSED_TIME','DISTANCE','traffic','O_SurfaceTemperatureFahrenheit','O_CloudCoveragePercent','O_WindSpeedMph','O_PrecipitationPreviousHourInches','O_SnowfallInches','D_SurfaceTemperatureFahrenheit',
      'D_CloudCoveragePercent','D_WindSpeedMph','D_PrecipitationPreviousHourInches','D_SnowfallInches','Bird_Strike']]

      #my model
      def build_model():
      model = keras.Sequential([
      layers.Dense(100, activation = 'sigmoid', input_shape=[len(data_train.keys())]),
      #layers.Dropout(0.5),
      layers.Dense(50, activation = 'softplus'),
      #layers.Dropout(0.3),
      layers.Dense(25, activation = 'sigmoid'),
      #layers.Dropout(0.2),
      layers.Dense(4, activation = 'softmax')
      ])

      model.compile(loss='sparse_categorical_crossentropy',#with binary crossentropy use sigmoid and 1 output neuron
      optimizer= tf.train.AdamOptimizer(0.001),
      metrics=['accuracy'])
      return model

      model = build_model()
      model.fit(data_train, train_labels, epochs=5, batch_size=128)



      test_loss, test_acc = model.evaluate(data_test_0, test_labels_0)
      print(test_acc)
      test_loss, test_acc = model.evaluate(data_test_1, test_labels_1)
      print(test_acc)
      test_loss, test_acc = model.evaluate(data_test_2, test_labels_2)
      print(test_acc)
      test_loss, test_acc = model.evaluate(data_test_3, test_labels_3)
      print(test_acc)


      `



      The training data is flights data of 2016 and 2017 and testing data is of 2018. I have separated classes from testing data to see the class wise accuracy of testing data.



      The output is:



      Epoch 1/5
      1990363/1990363 [==============================] - 17s 8us/step - loss: 1.3231 - acc: 0.3466
      Epoch 2/5
      1990363/1990363 [==============================] - 17s 8us/step - loss: 1.2799 - acc: 0.3821
      Epoch 3/5
      1990363/1990363 [==============================] - 17s 8us/step - loss: 1.2634 - acc: 0.3939
      Epoch 4/5
      1990363/1990363 [==============================] - 17s 8us/step - loss: 1.2519 - acc: 0.4013
      Epoch 5/5
      1990363/1990363 [==============================] - 16s 8us/step - loss: 1.2445 - acc: 0.4068

      Class 0:
      44929/44929 [==============================] - 1s 12us/step
      0.027710387500278218
      Class 1:
      10668/10668 [==============================] - 0s 11us/step
      0.015935508061492312
      Class 2:
      33204/33204 [==============================] - 0s 9us/step
      0.8956149861318866
      Class 3:
      274983/274983 [==============================] - 2s 9us/step
      0.035293090845941046


      The output stays roughly the same if I use ADASYN instead of SMOTE, or if I change the layers and activation functions. Please help me out.
      Thanks in advance.
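      Besides SMOTE/ADASYN oversampling, a common alternative for this kind of imbalance is to pass per-class weights to `model.fit(..., class_weight=...)` so the loss penalizes mistakes on rare classes more. A sketch of inverse-frequency ("balanced") weights, with made-up class counts for illustration:

```python
import numpy as np

def balanced_class_weights(labels):
    """Inverse-frequency weights, in the style of sklearn's 'balanced' mode:
    weight_c = n_samples / (n_classes * count_c)."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    n = len(labels)
    return {int(c): n / (len(classes) * cnt) for c, cnt in zip(classes, counts)}

# 4 classes as in the question; the counts here are invented for the example
weights = balanced_class_weights([0] * 50 + [1] * 10 + [2] * 30 + [3] * 310)
print(weights)
# would then be passed as: model.fit(data_train, train_labels, class_weight=weights, ...)
```

      Rare classes (here class 1) get the largest weight, the majority class (class 3) the smallest, so the gradient no longer pushes the model toward predicting only the dominant class.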







      deep-learning multiclass-classification mlp smote imbalanced-learn







      asked by Bhupesh_decoder


