Machine Learning 221007 #Week 1

2022. 10. 7. 15:42 · Sparta Coding Club [AI Track, Cohort 3] / Machine Learning Lectures


1-8

Super-simple Linear Regression practice (TensorFlow)

import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # switch to TF1-style graph mode (Session-based execution)

x_data = [[1, 1], [2, 2], [3, 3]]  # each input row corresponds to one label below
y_data = [[10], [20], [30]]

X = tf.compat.v1.placeholder(tf.float32, shape=[None, 2])  # a placeholder is a slot the data is fed into later; float32 because we mostly work with decimals
Y = tf.compat.v1.placeholder(tf.float32, shape=[None, 1])  # shape: None is the batch size (any number of rows), 2 and 1 are the feature/label widths

W = tf.Variable(tf.random.normal(shape=(2, 1)), name='W')  # W and b are Variables; they need initial values, here drawn at random (random.normal)
b = tf.Variable(tf.random.normal(shape=(1,)), name='b')    # the name can be anything
hypothesis = tf.matmul(X, W) + b  # the linear-regression hypothesis; matmul = matrix multiplication
cost = tf.reduce_mean(tf.square(hypothesis - Y))  # the cost function is mean squared error: subtract the labels from the hypothesis, square, then average
optimizer = tf.compat.v1.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)  # gradient descent, stepping by learning rate 0.01 in the direction that minimizes the cost
with tf.compat.v1.Session() as sess:  # TF1 training needs a Session: the store that holds every variable and the graph
  sess.run(tf.compat.v1.global_variables_initializer())  # initialize all variables; this line is boilerplate worth memorizing
  for step in range(50):  # repeat 50 times; machine learning is iterative
    c, W_, b_, _ = sess.run([cost, W, b, optimizer], feed_dict={X: x_data, Y: y_data})  # evaluate cost, W, b, and the optimizer; feed_dict feeds the data into the placeholders defined above
    print('Step: %2d\t loss: %.2f\t' % (step, c))  # print the loss at each step; it falls steadily, so training is working

  print(sess.run(hypothesis, feed_dict={X: [[4, 4]]}))  # sanity check: what y does the hypothesis give for x = [4, 4]?

Output of running the code:

Step: 0 loss: 558.65
Step: 1 loss: 354.16
Step: 2 loss: 224.61
Step: 3 loss: 142.53
Step: 4 loss: 90.53
Step: 5 loss: 57.59
Step: 6 loss: 36.72
Step: 7 loss: 23.49
Step: 8 loss: 15.11
Step: 9 loss: 9.80
Step: 10 loss: 6.44
Step: 11 loss: 4.30
Step: 12 loss: 2.95
Step: 13 loss: 2.09
Step: 14 loss: 1.55
Step: 15 loss: 1.20
Step: 16 loss: 0.98
Step: 17 loss: 0.84
Step: 18 loss: 0.75
Step: 19 loss: 0.69
Step: 20 loss: 0.65
Step: 21 loss: 0.63
Step: 22 loss: 0.61
Step: 23 loss: 0.60
Step: 24 loss: 0.59
Step: 25 loss: 0.58
Step: 26 loss: 0.58
Step: 27 loss: 0.57
Step: 28 loss: 0.57
Step: 29 loss: 0.57
Step: 30 loss: 0.56
Step: 31 loss: 0.56
Step: 32 loss: 0.56
Step: 33 loss: 0.55
Step: 34 loss: 0.55
Step: 35 loss: 0.55
Step: 36 loss: 0.55
Step: 37 loss: 0.54
Step: 38 loss: 0.54
Step: 39 loss: 0.54
Step: 40 loss: 0.53
Step: 41 loss: 0.53
Step: 42 loss: 0.53
Step: 43 loss: 0.53
Step: 44 loss: 0.52
Step: 45 loss: 0.52
Step: 46 loss: 0.52
Step: 47 loss: 0.51
Step: 48 loss: 0.51
Step: 49 loss: 0.51
[[38.609562]]
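As a sanity check (a sketch, not part of the lecture code), the same least-squares problem can be solved in closed form with NumPy. One exact solution is W = [5, 5], b = 0, which predicts exactly 40 for x = [4, 4]; the 38.6 above shows that 50 gradient steps have not fully converged yet, and more steps would close the gap.

import numpy as np

X = np.array([[1., 1.], [2., 2.], [3., 3.]])
y = np.array([[10.], [20.], [30.]])
A = np.hstack([X, np.ones((3, 1))])  # append a ones column so the bias b is the last entry of theta

# lstsq copes with the rank-deficient design matrix (the two feature columns are
# identical) by returning the minimum-norm solution via SVD.
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
print(theta.ravel())                     # roughly [5. 5. 0.]
print(np.array([[4., 4., 1.]]) @ theta)  # exact prediction for x = [4, 4]: [[40.]]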

 
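For reference, the same model can be trained without the compat.v1 shims. A minimal sketch, assuming TensorFlow 2.x: in eager mode, GradientTape replaces the Session/placeholder machinery above.

import tensorflow as tf

x = tf.constant([[1., 1.], [2., 2.], [3., 3.]])
y = tf.constant([[10.], [20.], [30.]])
W = tf.Variable(tf.random.normal((2, 1)))
b = tf.Variable(tf.random.normal((1,)))

for step in range(50):
  with tf.GradientTape() as tape:  # record operations so gradients can be computed
    hypothesis = tf.matmul(x, W) + b
    cost = tf.reduce_mean(tf.square(hypothesis - y))
  dW, db = tape.gradient(cost, [W, b])
  W.assign_sub(0.01 * dW)  # manual gradient-descent update, learning rate 0.01
  b.assign_sub(0.01 * db)
  print('Step: %2d\t loss: %.2f' % (step, float(cost)))

print(tf.matmul(tf.constant([[4., 4.]]), W) + b)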

Ultra-simple Linear Regression practice (Keras)

import numpy as np
from tensorflow.keras.models import Sequential  # Sequential: the class used to define a model as an ordered stack of layers
from tensorflow.keras.layers import Dense       # Dense implements the hypothesis we wrote by hand above (Wx + b)
from tensorflow.keras.optimizers import Adam, SGD  # import the Adam and SGD optimizers

x_data = np.array([[1], [2], [3]])  # Keras takes NumPy arrays as input
y_data = np.array([[10], [20], [30]])

model = Sequential([  # define the model; layers are stacked in order (linear regression needs just one)
  Dense(1)  # a single output
])

model.compile(loss='mean_squared_error', optimizer=SGD(learning_rate=0.1))  # pick the loss by name, no formulas needed; SGD with learning rate 0.1 (older Keras spelled this lr=)

model.fit(x_data, y_data, epochs=100)  # training a model is called fit; epochs = number of passes over the data (mind the plural s!)

 

Train on 3 samples
Epoch 1/100
3/3 [==============================] - 0s 36ms/sample - loss: 508.8659
Epoch 2/100
3/3 [==============================] - 0s 2ms/sample - loss: 8.0772
Epoch 3/100
3/3 [==============================] - 0s 2ms/sample - loss: 2.0003
Epoch 4/100
3/3 [==============================] - 0s 2ms/sample - loss: 1.8373
Epoch 5/100
3/3 [==============================] - 0s 1ms/sample - loss: 1.7492
Epoch 6/100
3/3 [==============================] - 0s 748us/sample - loss: 1.6661
Epoch 7/100
3/3 [==============================] - 0s 847us/sample - loss: 1.5869
Epoch 8/100
3/3 [==============================] - 0s 1ms/sample - loss: 1.5116
Epoch 9/100
3/3 [==============================] - 0s 751us/sample - loss: 1.4398
Epoch 10/100
3/3 [==============================] - 0s 749us/sample - loss: 1.3714
Epoch 11/100
3/3 [==============================] - 0s 680us/sample - loss: 1.3062
Epoch 12/100
3/3 [==============================] - 0s 677us/sample - loss: 1.2442
Epoch 13/100
3/3 [==============================] - 0s 697us/sample - loss: 1.1851
Epoch 14/100
3/3 [==============================] - 0s 777us/sample - loss: 1.1288
Epoch 15/100
3/3 [==============================] - 0s 772us/sample - loss: 1.0752
Epoch 16/100
3/3 [==============================] - 0s 682us/sample - loss: 1.0241
Epoch 17/100
3/3 [==============================] - 0s 836us/sample - loss: 0.9755
Epoch 18/100
3/3 [==============================] - 0s 680us/sample - loss: 0.9291
Epoch 19/100
3/3 [==============================] - 0s 894us/sample - loss: 0.8850
Epoch 20/100
3/3 [==============================] - 0s 746us/sample - loss: 0.8429
Epoch 21/100
3/3 [==============================] - 0s 690us/sample - loss: 0.8029
Epoch 22/100
3/3 [==============================] - 0s 1ms/sample - loss: 0.7648
Epoch 23/100
3/3 [==============================] - 0s 739us/sample - loss: 0.7284
Epoch 24/100
3/3 [==============================] - 0s 1ms/sample - loss: 0.6938
Epoch 25/100
3/3 [==============================] - 0s 686us/sample - loss: 0.6609
Epoch 26/100
3/3 [==============================] - 0s 749us/sample - loss: 0.6295
Epoch 27/100
3/3 [==============================] - 0s 720us/sample - loss: 0.5996
Epoch 28/100
3/3 [==============================] - 0s 801us/sample - loss: 0.5711
Epoch 29/100
3/3 [==============================] - 0s 894us/sample - loss: 0.5440
Epoch 30/100
3/3 [==============================] - 0s 773us/sample - loss: 0.5181
Epoch 31/100
3/3 [==============================] - 0s 698us/sample - loss: 0.4935
Epoch 32/100
3/3 [==============================] - 0s 1ms/sample - loss: 0.4701
Epoch 33/100
3/3 [==============================] - 0s 762us/sample - loss: 0.4478
Epoch 34/100
3/3 [==============================] - 0s 769us/sample - loss: 0.4265
Epoch 35/100
3/3 [==============================] - 0s 712us/sample - loss: 0.4062
Epoch 36/100
3/3 [==============================] - 0s 1ms/sample - loss: 0.3869
Epoch 37/100
3/3 [==============================] - 0s 717us/sample - loss: 0.3686
Epoch 38/100
3/3 [==============================] - 0s 784us/sample - loss: 0.3510
Epoch 39/100
3/3 [==============================] - 0s 837us/sample - loss: 0.3344
Epoch 40/100
3/3 [==============================] - 0s 841us/sample - loss: 0.3185
Epoch 41/100
3/3 [==============================] - 0s 901us/sample - loss: 0.3034
Epoch 42/100
3/3 [==============================] - 0s 854us/sample - loss: 0.2889
Epoch 43/100
3/3 [==============================] - 0s 2ms/sample - loss: 0.2752
Epoch 44/100
3/3 [==============================] - 0s 966us/sample - loss: 0.2622
Epoch 45/100
3/3 [==============================] - 0s 2ms/sample - loss: 0.2497
Epoch 46/100
3/3 [==============================] - 0s 1ms/sample - loss: 0.2378
Epoch 47/100
3/3 [==============================] - 0s 988us/sample - loss: 0.2265
Epoch 48/100
3/3 [==============================] - 0s 875us/sample - loss: 0.2158
Epoch 49/100
3/3 [==============================] - 0s 910us/sample - loss: 0.2055
Epoch 50/100
3/3 [==============================] - 0s 876us/sample - loss: 0.1958
Epoch 51/100
3/3 [==============================] - 0s 1ms/sample - loss: 0.1865
Epoch 52/100
3/3 [==============================] - 0s 901us/sample - loss: 0.1776
Epoch 53/100
3/3 [==============================] - 0s 1ms/sample - loss: 0.1692
Epoch 54/100
3/3 [==============================] - 0s 976us/sample - loss: 0.1611
Epoch 55/100
3/3 [==============================] - 0s 989us/sample - loss: 0.1535
Epoch 56/100
3/3 [==============================] - 0s 1ms/sample - loss: 0.1462
Epoch 57/100
3/3 [==============================] - 0s 914us/sample - loss: 0.1392
Epoch 58/100
3/3 [==============================] - 0s 1ms/sample - loss: 0.1326
Epoch 59/100
3/3 [==============================] - 0s 1ms/sample - loss: 0.1263
Epoch 60/100
3/3 [==============================] - 0s 1ms/sample - loss: 0.1203
Epoch 61/100
3/3 [==============================] - 0s 2ms/sample - loss: 0.1146
Epoch 62/100
3/3 [==============================] - 0s 860us/sample - loss: 0.1092
Epoch 63/100
3/3 [==============================] - 0s 749us/sample - loss: 0.1040
Epoch 64/100
3/3 [==============================] - 0s 1ms/sample - loss: 0.0990
Epoch 65/100
3/3 [==============================] - 0s 680us/sample - loss: 0.0943
Epoch 66/100
3/3 [==============================] - 0s 718us/sample - loss: 0.0899
Epoch 67/100
3/3 [==============================] - 0s 958us/sample - loss: 0.0856
Epoch 68/100
3/3 [==============================] - 0s 696us/sample - loss: 0.0815
Epoch 69/100
3/3 [==============================] - 0s 3ms/sample - loss: 0.0777
Epoch 70/100
3/3 [==============================] - 0s 729us/sample - loss: 0.0740
Epoch 71/100
3/3 [==============================] - 0s 742us/sample - loss: 0.0705
Epoch 72/100
3/3 [==============================] - 0s 944us/sample - loss: 0.0671
Epoch 73/100
3/3 [==============================] - 0s 630us/sample - loss: 0.0639
Epoch 74/100
3/3 [==============================] - 0s 703us/sample - loss: 0.0609
Epoch 75/100
3/3 [==============================] - 0s 881us/sample - loss: 0.0580
Epoch 76/100
3/3 [==============================] - 0s 710us/sample - loss: 0.0552
Epoch 77/100
3/3 [==============================] - 0s 960us/sample - loss: 0.0526
Epoch 78/100
3/3 [==============================] - 0s 589us/sample - loss: 0.0501
Epoch 79/100
3/3 [==============================] - 0s 798us/sample - loss: 0.0477
Epoch 80/100
3/3 [==============================] - 0s 737us/sample - loss: 0.0455
Epoch 81/100
3/3 [==============================] - 0s 758us/sample - loss: 0.0433
Epoch 82/100
3/3 [==============================] - 0s 621us/sample - loss: 0.0412
Epoch 83/100
3/3 [==============================] - 0s 698us/sample - loss: 0.0393
Epoch 84/100
3/3 [==============================] - 0s 698us/sample - loss: 0.0374
Epoch 85/100
3/3 [==============================] - 0s 757us/sample - loss: 0.0356
Epoch 86/100
3/3 [==============================] - 0s 586us/sample - loss: 0.0340
Epoch 87/100
3/3 [==============================] - 0s 702us/sample - loss: 0.0323
Epoch 88/100
3/3 [==============================] - 0s 678us/sample - loss: 0.0308
Epoch 89/100
3/3 [==============================] - 0s 3ms/sample - loss: 0.0293
Epoch 90/100
3/3 [==============================] - 0s 640us/sample - loss: 0.0279
Epoch 91/100
3/3 [==============================] - 0s 925us/sample - loss: 0.0266
Epoch 92/100
3/3 [==============================] - 0s 2ms/sample - loss: 0.0254
Epoch 93/100
3/3 [==============================] - 0s 577us/sample - loss: 0.0242
Epoch 94/100
3/3 [==============================] - 0s 727us/sample - loss: 0.0230
Epoch 95/100
3/3 [==============================] - 0s 633us/sample - loss: 0.0219
Epoch 96/100
3/3 [==============================] - 0s 637us/sample - loss: 0.0209
Epoch 97/100
3/3 [==============================] - 0s 643us/sample - loss: 0.0199
Epoch 98/100
3/3 [==============================] - 0s 658us/sample - loss: 0.0189
Epoch 99/100
3/3 [==============================] - 0s 964us/sample - loss: 0.0180
Epoch 100/100
3/3 [==============================] - 0s 621us/sample - loss: 0.0172
<tensorflow.python.keras.callbacks.History at 0x7f8863ba6c18>
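After fit returns (the History object above), the learned parameters can be read straight out of the layer. A quick check, not in the lecture notes:

W, b = model.layers[0].get_weights()  # Dense stores its kernel and bias as NumPy arrays
print(W, b)  # W should be close to 10 and b close to 0, since the data follows y = 10x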

Predicting on test data

y_pred = model.predict([[5]])  # call the model's predict method with a new x value to see what y it predicts

print(y_pred)

[[49.594902]]
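Since the data follows y = 10x, the true answer for x = 5 is 50; 49.59 after 100 epochs is close, and more epochs would push it closer still.

Adam is imported above but never used. As a variant (a sketch on the same data and model, not from the lecture), swapping it in for SGD is a one-line change; Adam adapts the step size per parameter instead of using a fixed one:

model.compile(loss='mean_squared_error', optimizer=Adam(learning_rate=0.1))
model.fit(x_data, y_data, epochs=100)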

 

1-9

https://colab.research.google.com/drive/1Jqn60TFKZ46STP6c5ZlQqQFBDLJyfaiK

 

Week 1 practice - 02. Kaggle linear regression practice


Week 1 assignment

https://colab.research.google.com/drive/1Xl5ouU3hPEZwRFEy9mtveSq4FFdbiKz_

 


 
