The argument must be a tuple of 1 integers. Received OR TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'

Problem Description

I am trying to use fit_generator together with Talos (for hyperparameter tuning). Earlier, when I used the fit method, I got a memory error, and when I searched here people said I should try fit_generator instead. At first I was still passing too many parameters, so I got a memory error even with fit_generator; I have now reduced the number of parameters and am getting a different error. Please find the code and the error below.

Code:

def yield_arrays_train(array_x_train_feat1=xtrain_np_img1,array_x_train_feat2=xtrain_np_img2,array_y_train=y_train_numpy,batch_size=6):
    # Loop forever, yielding ([feat1_batch, feat2_batch], label_batch) for fit_generator
    while 1:
        for i in range(14886):
            X_feat1_train = (array_x_train_feat1[i:i+batch_size,:,:].astype(np.float16))
            X_feat2_train = (array_x_train_feat2[i:i+batch_size,:,:].astype(np.float16))
            Y_train = (array_y_train[i:i+batch_size].astype(np.float16))
            yield ([(np.array(X_feat1_train)),(np.array(X_feat2_train))],(np.array(Y_train)))

def yield_arrays_val(array_x_test_feat1,array_x_test_feat2,array_y_test,batch_size):
    # Same pattern as yield_arrays_train, but over the validation arrays
    while 1:
        for i in range(60):
            X_feat1_test = (array_x_test_feat1[i:i+batch_size,:,:].astype(np.float16))
            X_feat2_test = (array_x_test_feat2[i:i+batch_size,:,:].astype(np.float16)) 
            Y_test = (array_y_test[i:i+batch_size].astype(np.float16))
            yield ([(np.array(X_feat1_test)),(np.array(X_feat2_test))],(np.array(Y_test)))
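
Note that because i advances by 1 while each slice spans batch_size rows, consecutive batches from these generators overlap heavily; steps_per_epoch=2481 with batch_size=6 (2481 * 6 = 14886) suggests non-overlapping batches were intended. A hypothetical variant (the _stepped name is illustrative, not from the question) that steps the index by the batch size:

def yield_arrays_train_stepped(feat1, feat2, labels, batch_size=6):
    # Stepping by batch_size yields non-overlapping batches covering all rows
    while 1:
        for i in range(0, len(labels) - batch_size + 1, batch_size):
            yield ([feat1[i:i + batch_size].astype(np.float16),
                    feat2[i:i + batch_size].astype(np.float16)],
                   labels[i:i + batch_size].astype(np.float16))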

def siamese (array_x_train_feat1=xtrain_np_img1,array_x_train_feat2=xtrain_np_img2,array_y_train=y_train_numpy,array_x_test_feat1=xtest_np_img1,array_x_test_feat2=xtest_np_img2,array_y_test=y_test_numpy):


    W_init = tf.keras.initializers.he_normal(seed=100)
    b_init = tf.keras.initializers.he_normal(seed=50)

    input_shape = (24,939)
    left_input = Input(input_shape)
    right_input = Input(input_shape)

    encoder = Sequential()
    encoder.add(Conv1D(filters=8,kernel_size=6, padding='same', activation='relu',input_shape=input_shape,kernel_initializer=W_init, bias_initializer=b_init))
    encoder.add(BatchNormalization())
    encoder.add(Dropout(.1))
    encoder.add(MaxPool1D())
    encoder.add(Conv1D(filters=6,kernel_size=4, padding='same', activation='relu'))
    encoder.add(BatchNormalization())
    encoder.add(Dropout(.1))
    encoder.add(MaxPool1D())
    encoder.add(Conv1D(filters=4,kernel_size=4, padding='same', activation='relu'))
    encoder.add(BatchNormalization())
    encoder.add(Dropout(.1))
    encoder.add(MaxPool1D())
    encoder.add(Flatten())
    encoder.add(Dense(10,activation='relu'))
    encoder.add(Dropout(.1))
    encoded_l = encoder(left_input)
    encoded_r = encoder(right_input)
    distance = Lambda(euclidean_distance, output_shape=eucl_dist_output_shape)([encoded_l, encoded_r])
    adam = optimizers.Adam(lr=.1, beta_1=0.1, beta_2=0.999,decay=.1, amsgrad=False)
    earlyStopping = EarlyStopping(monitor='loss',min_delta=0,patience=3,verbose=1,restore_best_weights=False)
    callback_early_stop_reduceLROnPlateau=[earlyStopping]
    model = Model([left_input, right_input], distance)
    model.compile(loss=contrastive_loss, optimizer=adam,metrics=[accuracy])
    model.summary()
    #history = model.fit([(x_train[:,:,:,0]).astype(np.float32),(x_train[:,:,:,1]).astype(np.float32)],y_train, validation_data=([(x_val[:,:,:,0]).astype(np.float32),(x_val[:,:,:,1]).astype(np.float32)], y_val) ,batch_size=params['batch_size'],epochs=params['epochs'],callbacks=callback_early_stop_reduceLROnPlateau)
  
    history=model.fit_generator(generator=yield_arrays_train(array_x_train_feat1,array_x_train_feat2,array_y_train,6),validation_data=yield_arrays_val(array_x_test_feat1,array_x_test_feat2,array_y_test,6),steps_per_epoch=2481,epochs=5, validation_steps=1000,verbose=1,callbacks=callback_early_stop_reduceLROnPlateau,use_multiprocessing=False,workers=0)

    return history,model

siamese (xtrain_np_img1,xtrain_np_img2,y_train_numpy,xtest_np_img1,xtest_np_img2,y_test_numpy)

Output:

WARNING:tensorflow:From C:\Users\DELL\AppData\Roaming\Python\Python37\site-packages\keras\backend\tensorflow_backend.py:4070: The name tf.nn.max_pool is deprecated. Please use tf.nn.max_pool2d instead.

Model: "model_1"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_1 (InputLayer)            (None, 24, 939)      0                                            
__________________________________________________________________________________________________
input_2 (InputLayer)            (None, 24, 939)      0                                            
__________________________________________________________________________________________________
sequential_1 (Sequential)       (None, 10)           45580       input_1[0][0]                    
                                                                 input_2[0][0]                    
__________________________________________________________________________________________________
lambda_1 (Lambda)               (None, 1)            0           sequential_1[1][0]               
                                                                 sequential_1[2][0]               
==================================================================================================
Total params: 45,580
Trainable params: 45,544
Non-trainable params: 36
__________________________________________________________________________________________________
WARNING:tensorflow:From C:\Users\DELL\anaconda3\envs\MyEnv\lib\site-packages\tensorflow\python\ops\math_grad.py:1250: add_dispatch_support.<locals>.wrapper (from tensorflow.python.ops.array_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.where in 2.0, which has the same broadcast rule as np.where
WARNING:tensorflow:From C:\Users\DELL\AppData\Roaming\Python\Python37\site-packages\keras\backend\tensorflow_backend.py:422: The name tf.global_variables is deprecated. Please use tf.compat.v1.global_variables instead.

Epoch 1/5
2481/2481 [==============================] - 30s 12ms/step - loss: 0.0024 - accuracy: 0.9992 - val_loss: 0.8333 - val_accuracy: 0.1667
Epoch 2/5
2481/2481 [==============================] - 28s 11ms/step - loss: 6.9194e-05 - accuracy: 0.9999 - val_loss: 0.8333 - val_accuracy: 0.1667
Epoch 3/5
2481/2481 [==============================] - 28s 11ms/step - loss: nan - accuracy: 0.9993 - val_loss: nan - val_accuracy: 0.8333
Epoch 4/5
2481/2481 [==============================] - 28s 11ms/step - loss: nan - accuracy: 1.0000 - val_loss: nan - val_accuracy: 0.8333
Epoch 5/5
2481/2481 [==============================] - 28s 11ms/step - loss: nan - accuracy: 1.0000 - val_loss: nan - val_accuracy: 0.8333
Epoch 00005: early stopping
(<keras.callbacks.callbacks.History at 0x26cf45c6ec8>,
 <keras.engine.training.Model at 0x26cf3e364c8>)

So the above code works, without Talos, when I use one hard-coded set of parameters.

Whereas if I use

p = {
    'filter1': [4,6,8],
    'kernel_size1': [2,4,6],
    'filter3': [2,4,6],
    'kernel_size3': [4,6,8],
    'decay': [.1,0.01,.001],
    'droprate1': [.1,.2,.3],
    'filter2': [4,6,8],
    'kernel_size2': [8,12],
    'droprate4': [.1,.2],
    'droprate2': [.1,.2],
    'unit1': [10,36,64],
    'droprate3': [.1,.2],
    'lr': [.1,0.01,.001]
}

def siamese (array_x_train_feat1=xtrain_np_img1,array_x_train_feat2=xtrain_np_img2,array_y_train=y_train_numpy,array_x_test_feat1=xtest_np_img1,array_x_test_feat2=xtest_np_img2,array_y_test=y_test_numpy,params=p):


    W_init = tf.keras.initializers.he_normal(seed=100)
    b_init = tf.keras.initializers.he_normal(seed=50)

    input_shape = (24,939)
    left_input = Input(input_shape)
    right_input = Input(input_shape)

    encoder = Sequential()
    encoder.add(Conv1D(filters=(params['filter1']),kernel_size=(params['kernel_size1']), padding='same', activation='relu',input_shape=input_shape,kernel_initializer=W_init, bias_initializer=b_init))
    encoder.add(BatchNormalization())
    encoder.add(Dropout((params['droprate1'])))
    encoder.add(MaxPool1D())
    encoder.add(Conv1D(filters=(params['filter2']),kernel_size=(params['kernel_size2']), padding='same', activation='relu'))
    encoder.add(BatchNormalization())
    encoder.add(Dropout((params['droprate2'])))
    encoder.add(MaxPool1D())
    encoder.add(Conv1D(filters=(params['filter3']),kernel_size=(params['kernel_size3']), padding='same', activation='relu'))
    encoder.add(BatchNormalization())
    encoder.add(Dropout((params['droprate3'])))
    encoder.add(MaxPool1D())
    encoder.add(Flatten())
    encoder.add(Dense((params['unit1']),activation='relu'))
    encoder.add(Dropout((params['droprate4'])))

    encoded_l = encoder(left_input)
    encoded_r = encoder(right_input)
    distance = Lambda(euclidean_distance, output_shape=eucl_dist_output_shape)([encoded_l, encoded_r])
    adam = optimizers.Adam(lr=params['lr'], beta_1=0.1, beta_2=0.999,decay=.1, amsgrad=False)
    earlyStopping = EarlyStopping(monitor='loss',min_delta=0,patience=3,verbose=1,restore_best_weights=False)
    callback_early_stop_reduceLROnPlateau=[earlyStopping]
    model = Model([left_input, right_input], distance)
    model.compile(loss=contrastive_loss, optimizer=adam,metrics=[accuracy])
    model.summary()
    history=model.fit_generator(generator=yield_arrays_train(array_x_train_feat1,array_x_train_feat2,array_y_train,6),validation_data=yield_arrays_val(array_x_test_feat1,array_x_test_feat2,array_y_test,6),steps_per_epoch=2481,epochs=5, validation_steps=1000,verbose=1,callbacks=callback_early_stop_reduceLROnPlateau,use_multiprocessing=False,workers=0)
    return history,model

t=ta.Scan(x=[xtrain_np_img1.astype(np.float16),xtrain_np_img2.astype(np.float16)],y=y_train_numpy,x_val=[xtest_np_img1,xtest_np_img2],y_val=y_test_numpy,model=siamese,params=p,experiment_name='exp_1')

Error:

----> 3 t=ta.Scan(x=[xtrain_np_img1.astype(np.float16),xtrain_np_img2.astype(np.float16)],y=y_train_numpy,x_val=[xtest_np_img1,xtest_np_img2],y_val=y_test_numpy,model=siamese,params=p,experiment_name='exp_1')

~\anaconda3\envs\MyEnv\lib\site-packages\talos\scan\Scan.py in __init__(self, x, y, params, model, experiment_name, x_val, y_val, val_split, random_method, seed, performance_target, fraction_limit, round_limit, time_limit, boolean_limit, reduction_method, reduction_interval, reduction_window, reduction_threshold, reduction_metric, minimize_loss, disable_progress_bar, print_params, clear_session, save_weights)
    194         # start runtime
    195         from .scan_run import scan_run
--> 196         scan_run(self)

~\anaconda3\envs\MyEnv\lib\site-packages\talos\scan\scan_run.py in scan_run(self)
     24         # otherwise proceed with next permutation
     25         from .scan_round import scan_round
---> 26         self = scan_round(self)
     27         self.pbar.update(1)
     28 

~\anaconda3\envs\MyEnv\lib\site-packages\talos\scan\scan_round.py in scan_round(self)
     17     # fit the model
     18     from ..model.ingest_model import ingest_model
---> 19     self.model_history, self.round_model = ingest_model(self)
     20     self.round_history.append(self.model_history.history)
     21 

~\anaconda3\envs\MyEnv\lib\site-packages\talos\model\ingest_model.py in ingest_model(self)
      8                       self.x_val,
      9                       self.y_val,
---> 10                       self.round_params)

<ipython-input-27-fe409e1ff506> in siamese(array_x_train_feat1, array_x_train_feat2, array_y_train, array_x_test_feat1, array_x_test_feat2, array_y_test, params)
     11 
     12     encoder = Sequential()
---> 13     encoder.add(Conv1D(filters=(params['filter1']),kernel_size=(params['kernel_size1']), padding='same', activation='relu',input_shape=input_shape,kernel_initializer=W_init, bias_initializer=b_init))
     14     encoder.add(BatchNormalization())
     15     encoder.add(Dropout((params['droprate1'])))

~\AppData\Roaming\Python\Python37\site-packages\keras\legacy\interfaces.py in wrapper(*args, **kwargs)
     89                 warnings.warn('Update your `' + object_name + '` call to the ' +
     90                               'Keras 2 API: ' + signature, stacklevel=2)
---> 91             return func(*args, **kwargs)
     92         wrapper._original_function = func
     93         return wrapper

~\AppData\Roaming\Python\Python37\site-packages\keras\layers\convolutional.py in __init__(self, filters, kernel_size, strides, padding, data_format, dilation_rate, activation, use_bias, kernel_initializer, bias_initializer, kernel_regularizer, bias_regularizer, activity_regularizer, kernel_constraint, bias_constraint, **kwargs)
    351             kernel_constraint=kernel_constraint,
    352             bias_constraint=bias_constraint,
--> 353             **kwargs)
    354 
    355     def get_config(self):

~\AppData\Roaming\Python\Python37\site-packages\keras\layers\convolutional.py in __init__(self, rank, filters, kernel_size, strides, padding, data_format, dilation_rate, activation, use_bias, kernel_initializer, bias_initializer, kernel_regularizer, bias_regularizer, activity_regularizer, kernel_constraint, bias_constraint, **kwargs)
    107         self.filters = filters
    108         self.kernel_size = conv_utils.normalize_tuple(kernel_size, rank,
--> 109                                                       'kernel_size')
    110         self.strides = conv_utils.normalize_tuple(strides, rank, 'strides')
    111         self.padding = conv_utils.normalize_padding(padding)

~\AppData\Roaming\Python\Python37\site-packages\keras\utils\conv_utils.py in normalize_tuple(value, n, name)
     37         if len(value_tuple) != n:
     38             raise ValueError('The `' + name + '` argument must be a tuple of ' +
---> 39                              str(n) + ' integers. Received: ' + str(value))
     40         for single_value in value_tuple:
     41             try:

ValueError: The `kernel_size` argument must be a tuple of 1 integers. Received: [2, 4, 6]
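
The ValueError itself is reproducible outside Talos: Keras normalizes kernel_size into a tuple whose length must match the convolution rank (1 for Conv1D), so a three-element list is rejected as soon as the layer is constructed. A minimal sketch, assuming only that keras is installed:

from keras.layers import Conv1D

try:
    Conv1D(filters=8, kernel_size=[2, 4, 6], padding='same')
except ValueError as e:
    print(e)  # The `kernel_size` argument must be a tuple of 1 integers. Received: [2, 4, 6]

Conv1D(filters=8, kernel_size=4, padding='same')  # OK: a scalar int normalizes to (4,)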

And if I supply just a single set of hyperparameters, to check whether I get a different error or the same one:

p = {
    'filter1': [6],
    'kernel_size1': [4],
    'filter3': [4],
    'kernel_size3': [6],
    'decay': [.1],
    'droprate1': [.1],
    'filter2': [4],
    'kernel_size2': [8],
    'droprate4': [.1],
    'droprate2': [.1],
    'unit1': [10],
    'droprate3': [.1],
    'lr': [.1]
}
def siamese (array_x_train_feat1=xtrain_np_img1,array_x_train_feat2=xtrain_np_img2,array_y_train=y_train_numpy,array_x_test_feat1=xtest_np_img1,array_x_test_feat2=xtest_np_img2,array_y_test=y_test_numpy,params=p):


    W_init = tf.keras.initializers.he_normal(seed=100)
    b_init = tf.keras.initializers.he_normal(seed=50)

    input_shape = (24,939)
    left_input = Input(input_shape)
    right_input = Input(input_shape)

    encoder = Sequential()
    encoder.add(Conv1D(filters=(params['filter1']),kernel_size=(params['kernel_size1']), padding='same', activation='relu',input_shape=input_shape,kernel_initializer=W_init, bias_initializer=b_init))
    encoder.add(BatchNormalization())
    encoder.add(Dropout((params['droprate1'])))
    encoder.add(MaxPool1D())
    encoder.add(Conv1D(filters=(params['filter2']),kernel_size=(params['kernel_size2']), padding='same', activation='relu'))
    encoder.add(BatchNormalization())
    encoder.add(Dropout((params['droprate2'])))
    encoder.add(MaxPool1D())
    encoder.add(Conv1D(filters=(params['filter3']),kernel_size=(params['kernel_size3']), padding='same', activation='relu'))
    encoder.add(BatchNormalization())
    encoder.add(Dropout((params['droprate3'])))
    encoder.add(MaxPool1D())
    encoder.add(Flatten())
    encoder.add(Dense((params['unit1']),activation='relu'))
    encoder.add(Dropout((params['droprate4'])))

    encoded_l = encoder(left_input)
    encoded_r = encoder(right_input)
    distance = Lambda(euclidean_distance, output_shape=eucl_dist_output_shape)([encoded_l, encoded_r])
    adam = optimizers.Adam(lr=params['lr'], beta_1=0.1, beta_2=0.999,decay=.1, amsgrad=False)
    earlyStopping = EarlyStopping(monitor='loss',min_delta=0,patience=3,verbose=1,restore_best_weights=False)
    callback_early_stop_reduceLROnPlateau=[earlyStopping]
    model = Model([left_input, right_input], distance)
    model.compile(loss=contrastive_loss, optimizer=adam,metrics=[accuracy])
    model.summary()
    history=model.fit_generator(generator=yield_arrays_train(array_x_train_feat1,array_x_train_feat2,array_y_train,6),validation_data=yield_arrays_val(array_x_test_feat1,array_x_test_feat2,array_y_test,6),steps_per_epoch=2481,epochs=5, validation_steps=1000,verbose=1,callbacks=callback_early_stop_reduceLROnPlateau,use_multiprocessing=False,workers=0)
    return history,model
t=ta.Scan(x=[xtrain_np_img1.astype(np.float16),xtrain_np_img2.astype(np.float16)],y=y_train_numpy,x_val=[xtest_np_img1,xtest_np_img2],y_val=y_test_numpy,model=siamese,params=p,experiment_name='exp_1')

Error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-31-4c923301ede6> in <module>
      1 #t=ta.Scan(x=xtrain_np_img1_img2,y=y_train_numpy,x_val=xtest_np_img1_img2,y_val=y_test_numpy,model=siamese,params=p,experiment_name='exp_1')
      2 
----> 3 t=ta.Scan(x=[xtrain_np_img1.astype(np.float16),xtrain_np_img2.astype(np.float16)],y=y_train_numpy,x_val=[xtest_np_img1,xtest_np_img2],y_val=y_test_numpy,model=siamese,params=p,experiment_name='exp_1')

~\anaconda3\envs\MyEnv\lib\site-packages\talos\scan\Scan.py in __init__(self, x, y, params, model, experiment_name, x_val, y_val, val_split, random_method, seed, performance_target, fraction_limit, round_limit, time_limit, boolean_limit, reduction_method, reduction_interval, reduction_window, reduction_threshold, reduction_metric, minimize_loss, disable_progress_bar, print_params, clear_session, save_weights)
    194         # start runtime
    195         from .scan_run import scan_run
--> 196         scan_run(self)

~\anaconda3\envs\MyEnv\lib\site-packages\talos\scan\scan_run.py in scan_run(self)
     24         # otherwise proceed with next permutation
     25         from .scan_round import scan_round
---> 26         self = scan_round(self)
     27         self.pbar.update(1)
     28 

~\anaconda3\envs\MyEnv\lib\site-packages\talos\scan\scan_round.py in scan_round(self)
     17     # fit the model
     18     from ..model.ingest_model import ingest_model
---> 19     self.model_history, self.round_model = ingest_model(self)
     20     self.round_history.append(self.model_history.history)
     21 

~\anaconda3\envs\MyEnv\lib\site-packages\talos\model\ingest_model.py in ingest_model(self)
      8                       self.x_val,
      9                       self.y_val,
---> 10                       self.round_params)

<ipython-input-30-fe409e1ff506> in siamese(array_x_train_feat1, array_x_train_feat2, array_y_train, array_x_test_feat1, array_x_test_feat2, array_y_test, params)
     11 
     12     encoder = Sequential()
---> 13     encoder.add(Conv1D(filters=(params['filter1']),kernel_size=(params['kernel_size1']), padding='same', activation='relu',input_shape=input_shape,kernel_initializer=W_init, bias_initializer=b_init))
     14     encoder.add(BatchNormalization())
     15     encoder.add(Dropout((params['droprate1'])))

~\AppData\Roaming\Python\Python37\site-packages\keras\engine\sequential.py in add(self, layer)
    164                     # and create the node connecting the current layer
    165                     # to the input layer we just created.
--> 166                     layer(x)
    167                     set_inputs = True
    168             else:

~\AppData\Roaming\Python\Python37\site-packages\keras\engine\base_layer.py in __call__(self, inputs, **kwargs)
    461                                          'You can build it manually via: '
    462                                          '`layer.build(batch_input_shape)`')
--> 463                 self.build(unpack_singleton(input_shapes))
    464                 self.built = True
    465 

~\AppData\Roaming\Python\Python37\site-packages\keras\layers\convolutional.py in build(self, input_shape)
    139                                       name='kernel',
    140                                       regularizer=self.kernel_regularizer,
--> 141                                       constraint=self.kernel_constraint)
    142         if self.use_bias:
    143             self.bias = self.add_weight(shape=(self.filters,),

~\AppData\Roaming\Python\Python37\site-packages\keras\engine\base_layer.py in add_weight(self, name, shape, dtype, initializer, regularizer, trainable, constraint)
    277         if dtype is None:
    278             dtype = self.dtype
--> 279         weight = K.variable(initializer(shape, dtype=dtype),
    280                             dtype=dtype,
    281                             name=name,

~\anaconda3\envs\MyEnv\lib\site-packages\tensorflow\python\ops\init_ops.py in __call__(self, shape, dtype, partition_info)
    513     if partition_info is not None:
    514       scale_shape = partition_info.full_shape
--> 515     fan_in, fan_out = _compute_fans(scale_shape)
    516     if self.mode == "fan_in":
    517       scale /= max(1., fan_in)

~\anaconda3\envs\MyEnv\lib\site-packages\tensorflow\python\ops\init_ops.py in _compute_fans(shape)
   1445     fan_in = shape[-2] * receptive_field_size
   1446     fan_out = shape[-1] * receptive_field_size
-> 1447   return int(fan_in), int(fan_out)
   1448 
   1449 

TypeError: int() argument must be a string, a bytes-like object or a number, not 'list'
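
This second traceback has the same root cause one step later: kernel_size=[4] happens to pass the tuple-of-1 check, but filters=[6] is still a list, so when the he_normal initializer computes fan-in and fan-out from the kernel shape, int() is handed a list. The mechanism in plain Python (an illustration of the arithmetic in _compute_fans, not the TensorFlow source itself):

receptive_field_size = 4              # kernel_size [4] normalized to the 1-tuple (4,)
fan_out = [6] * receptive_field_size  # filters=[6] is a list, so list * int repeats it
try:
    int(fan_out)                      # int([6, 6, 6, 6]) cannot work
except TypeError as e:
    print(e)  # int() argument must be a string, a bytes-like object or a number, not 'list'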

To me all of these look like Python errors, not something related to Talos (the hyperparameter optimization tool).

My questions are:

  1. What mistake am I making in my code?
  2. Is the logic of the way I have written the code wrong?

I have another question, but I will start a separate thread for it: for the problem I am trying to solve, recognizing speakers from mfcc + mfcc_del + mfcc_del_del features with a Siamese network, will this approach work? The full code can be accessed through the link below; it will not run because I have not yet uploaded the data files to Google Drive, but most of it is visible there.

https://colab.research.google.com/drive/1Twssq_MiQ4RxMZ69A595SFIgs4JhG18u?usp=sharing

EDIT: I just checked a link, https://www.kaggle.com/sohaibanwaar1203/talos-hyper-parameter-optimization, where the code is:

params = {'lr': (0.1, 0.01,1 ),
     'epochs': [10,5,15],
     'dropout': (0, 0.40, 0.8),
     'optimizer': ["Adam","Adagrad","sgd"],
     'loss': ["binary_crossentropy","mean_squared_error","mean_absolute_error"],
     'last_activation': ["softmax","sigmoid"],
     'activation' :["relu","selu","linear"],
     'clipnorm':(0.0,0.5,1),
     'decay':(1e-6,1e-4,1e-2),
     'momentum':(0.9,0.5,0.2),
     'l1': (0.01,0.001,0.0001),
     'l2': (0.01,0.001,0.0001),
     'No_of_CONV_and_Maxpool_layers':[1,2],
     'No_of_Dense_Layers': [2,3,4],
     'No_of_Units_in_dense_layers':[64,32],
     'Kernal_Size':[(3,3),(5,5)],
     'Conv2d_filters':[60,40,80,120],
     'pool_size':[(3,3),(5,5)],
     'padding':["valid","same"]
    }
    lr = params['lr']
    epochs=params['epochs']
    dropout_rate=params['dropout']
    optimizer=params['optimizer']
    loss=params['loss']
    last_activation=params['last_activation']
    activation=params['activation']
    clipnorm=params['clipnorm']
    decay=params['decay']
    momentum=params['momentum']
    l1=params['l1']
    l2=params['l2']
    No_of_CONV_and_Maxpool_layers=params['No_of_CONV_and_Maxpool_layers']
    No_of_Dense_Layers =params['No_of_Dense_Layers']
    No_of_Units_in_dense_layers=params['No_of_Units_in_dense_layers']
    Kernal_Size=params['Kernal_Size']
    Conv2d_filters=params['Conv2d_filters']
    pool_size_p=params['pool_size']
    padding_p=params['padding']

Output:

print (type((params['epochs'])))
<class 'list'>

So I can see that even in that notebook the values are lists, which is why I don't understand why it raises an error in my case.
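
One plausible explanation (an assumption about ta.Scan's behavior, not something confirmed in this thread): in Talos the dict of lists is the search space, and each round is supposed to call the model function with a params dict holding one scalar drawn from each list. If those per-round scalars never reach the model function, for example because its signature does not line up with how Talos invokes it, a default argument such as params=p leaves the raw lists in place and they flow straight into Conv1D. A minimal pure-Python sketch of the one-scalar-per-round idea, not of Talos internals:

import itertools

p = {'filter1': [4, 6, 8], 'kernel_size1': [2, 4, 6]}  # search space: lists of candidates

keys = list(p)
for combo in itertools.product(*(p[k] for k in keys)):
    round_params = dict(zip(keys, combo))  # e.g. {'filter1': 4, 'kernel_size1': 2}
    # each round's model call should see scalars like these, never the raw lists
    print(round_params)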

Recommended Answer

I ran into the same error, and I solved it by casting the value to int:

encoder.add(Conv1D(filters=(params['filter1']),kernel_size=int(params['kernel_size1']), padding='same', activation='relu',input_shape=input_shape,kernel_initializer=W_init, bias_initializer=b_init))

That solved my problem.
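
Applied consistently, the same cast goes onto every integer-valued hyperparameter. A sketch based on that fix (not the answerer's complete code); note that int() converts scalars such as 4 or a NumPy integer, so if a whole list like [2, 4, 6] still reaches the layer the cast raises its own TypeError:

# Cast each integer hyperparameter explicitly before it reaches Keras
encoder.add(Conv1D(filters=int(params['filter1']),
                   kernel_size=int(params['kernel_size1']),
                   padding='same', activation='relu',
                   input_shape=input_shape,
                   kernel_initializer=W_init,
                   bias_initializer=b_init))
encoder.add(Dense(int(params['unit1']), activation='relu'))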
