How to feed multiple values of X data per Y value into a TensorFlow NN

By simon at 2018-02-07 • 0 favorites • 20 views

I have two sets of numpy array data, each of size 36 * 3. Each row of both arrays corresponds to the same Y value. I need to feed each row of numpy array 1 into a regression NN to generate a value, and then sum all of those values. This needs to be repeated for numpy array 2. The final prediction for the single Y value is the difference between the two predicted sums.

I am not sure how to feed the values of these rows of x data (3 input nodes) into an X placeholder, each y value into a Y placeholder, and then run a TF session. The entire code below corresponds to only one valid data point; the function get_dataset(1) shows how a single data point is currently loaded. I would like to know how others would approach this problem. Basically, I do not know how to format the x data when each Y has multiple input values.

data = get_dataset(1)  # 36*6 np array corresponding to one Y value

ideal_data = data[:,[0,1,4]] # ideal and displaced data are in these columns (0,1) and (2,3)
ideal_data = ideal_data.tolist() # convert to a nested Python list

displaced_data = data[:,[2,3,4]]
displaced_data = displaced_data.tolist()

y = data[0][5]

y_data = tf.convert_to_tensor(y)

for i in range(36): # get each row i.e. X(i) datapoints
  ideal_data_tf = tf.convert_to_tensor(ideal_data[i])
  displaced_data_tf = tf.convert_to_tensor(displaced_data[i])
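In plain NumPy terms, the computation I am after looks like the sketch below. `predict_row` here is a hypothetical stand-in for the regression NN (the point is the data flow, not the model), and the arrays are random placeholders for the two column selections above:

```python
import numpy as np

def predict_row(row, w):
    # stand-in for the regression NN: maps a 3-vector to a scalar
    return float(np.dot(row, w))

rng = np.random.default_rng(0)
ideal = rng.random((36, 3))      # stands in for data[:, [0, 1, 4]]
displaced = rng.random((36, 3))  # stands in for data[:, [2, 3, 4]]
w = np.ones(3)                   # dummy "network" weights

# per-row predictions, summed separately for each array
sum_ideal = sum(predict_row(r, w) for r in ideal)
sum_displaced = sum(predict_row(r, w) for r in displaced)

# final prediction for the single Y value
prediction = sum_ideal - sum_displaced
```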
My regression NN is currently defined as the function below, with a single X placeholder:
with tf.name_scope("Training_Neural_Network"):
#Training Computation
  def training_multilayer_perceptron(X, weights, biases): #dropout should only be used during training, not during evaluation
    with tf.name_scope("Layer1"):
      layer_1 = tf.add(tf.matmul(X, weights['W1']), biases['b1'])
      layer_1 = tf.nn.relu(layer_1)
      layer_1 = tf.nn.dropout(layer_1,keep_prob)
    with tf.name_scope("Layer2"):
      layer_2 = tf.add(tf.matmul(layer_1, weights['W2']), biases['b2'])
      layer_2 = tf.nn.relu(layer_2)
      layer_2 = tf.nn.dropout(layer_2,keep_prob)
    with tf.name_scope("Layer3"):
      out_layer = tf.add(tf.matmul(layer_2, weights['W3']), biases['b3'])
      return out_layer
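For context, the `weights` and `biases` dictionaries this function expects would need shapes like the following (the hidden-layer sizes here are placeholders, not values from my code). A NumPy mirror of the forward pass, with dropout omitted, confirms that the shapes line up for a batch of 36 rows with 3 inputs each:

```python
import numpy as np

n_input, n_hidden_1, n_hidden_2, n_output = 3, 10, 10, 1  # hypothetical sizes

# shapes the weights/biases dicts must have for a 3-input network
weights = {
    'W1': np.zeros((n_input, n_hidden_1)),
    'W2': np.zeros((n_hidden_1, n_hidden_2)),
    'W3': np.zeros((n_hidden_2, n_output)),
}
biases = {
    'b1': np.zeros(n_hidden_1),
    'b2': np.zeros(n_hidden_2),
    'b3': np.zeros(n_output),
}

# NumPy forward pass mirroring the TF graph (relu = max(., 0))
X = np.zeros((36, n_input))  # one 36x3 array fed as a batch of 36 rows
layer_1 = np.maximum(X @ weights['W1'] + biases['b1'], 0)
layer_2 = np.maximum(layer_1 @ weights['W2'] + biases['b2'], 0)
out = layer_2 @ weights['W3'] + biases['b3']
```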
P.S. This is the first question I have posted, so any comments on how I could improve my question would be appreciated.

1 reply | Last updated 2018-02-07

2018-02-07   #1

I may not fully understand the question, but I implemented something very similar in my master's thesis, so I will show you my code (with well-structured comments).

import numpy as np
import tensorflow as tf
from pandas import read_excel

df = read_excel('Train_Data.xlsx')

# convert dataframe into array
data = np.asarray(df, dtype=np.int64)

# train data (x) contains all rows and all columns except the last
x = data[:, :-1]
# label data (y) contains all rows and only the last column 
y = data[:, -1]

# label data is reshaped to fit the right format
y = np.reshape(y, [y.shape[0], 1])

# both datasets are shuffled to simplify split of train and test data
permutation = np.random.permutation(x.shape[0])
x = x[permutation]
y = y[permutation]

# test data ratio is determined
test_size = 0.1

# train data is sliced from list of total data, test data equals the rest
num_test = int(test_size * len(data))
X_train = x[:-num_test]
X_test = x[-num_test:]

# the same applies for the label values
Y_train = y[:-num_test]
Y_test = y[-num_test:]

[...]

# defining placeholder variables for model
x = tf.placeholder("float", [None, 7])
y = tf.placeholder("float", [None, 1])
The placeholder definition for x follows the shape of the input data, so if you have something like 36 * 6 = 216 features corresponding to 1 label, you would set:
x = tf.placeholder("float", [None, 216])
y = tf.placeholder("float", [None, 1])
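To get each 36 * 6 data point into that 216-wide format, you can flatten it row by row with `reshape`. A minimal sketch, with an illustrative sample count of 5:

```python
import numpy as np

# e.g. 5 data points, each a 36x6 array, stacked along the first axis
samples = np.zeros((5, 36, 6))

# flatten each data point into one feature row of length 36 * 6 = 216,
# matching the [None, 216] placeholder shape
batch_x = samples.reshape(samples.shape[0], -1)
```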
Then you feed this placeholder in your session (I have implemented batching):
# launch the graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # training cycle
    for epoch in range(training_epochs):
        avg_cost = 0.
        total_batch = int(total_len/batch_size)
        # loop over all batches
        for i in range(total_batch-1):
            batch_x = X_train[i*batch_size:(i+1)*batch_size]
            batch_y = Y_train[i*batch_size:(i+1)*batch_size]
            # run optimization (backprop) and cost op (to get loss value)
            _, c, p = sess.run([optimizer, cost, pred], feed_dict={x: batch_x,
                                                                   y: batch_y})
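The batch-slicing logic by itself can be sketched in NumPy with illustrative sizes (note that `range(total_batch-1)` in the loop above leaves the last batch unused):

```python
import numpy as np

X_train = np.arange(100).reshape(50, 2)   # 50 samples, 2 features (illustrative)
batch_size = 8
total_batch = len(X_train) // batch_size  # number of full batches

# slice the training data into consecutive, equally sized batches
batches = [X_train[i * batch_size:(i + 1) * batch_size]
           for i in range(total_batch)]
```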
