Linear Regression, a First Machine Learning Algorithm - Univariate Linear Regression
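Univariate linear regression fits a model of the form y = θ0 + θ1·x. Both examples below generate 100 synthetic samples from y = 4 + 3x plus standard Gaussian noise, so a good fit should recover an intercept close to 4 and a slope close to 3.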
"""
Implementing linear regression with sklearn
"""
import numpy as np
from sklearn.linear_model import LinearRegression
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
lin_reg = LinearRegression()
# fit() is the method that trains the model
lin_reg.fit(X, y)
# intercept_ is the intercept (bias); coef_ holds the learned coefficients
print(lin_reg.intercept_, lin_reg.coef_)
# predict on new inputs
X_new = np.array([[0], [2]])
print(lin_reg.predict(X_new))
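As a quick cross-check (this snippet is not part of the original code), the same parameters can also be computed in closed form with the normal equation, theta = (XᵀX)⁻¹Xᵀy; the gradient descent version below approximates this solution iteratively:

import numpy as np
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
# prepend a column of ones so theta[0] acts as the intercept
X_b = np.c_[np.ones((100, 1)), X]
# normal equation: theta = (X^T X)^(-1) X^T y
theta_best = np.linalg.inv(X_b.T.dot(X_b)).dot(X_b.T).dot(y)
print(theta_best)  # should be close to [[4], [3]]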
#encoding=utf-8
"""
Batch gradient descent (batch_gradient_descent) for linear regression
"""
import numpy as np
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
X_b = np.c_[np.ones((100, 1)), X]
#print(X_b)
learning_rate = 0.1
# In practice we usually do not wait for true convergence (it wastes too much time); instead we fix a number of iterations
n_iterations = 1000
m = 100
# 1. Initialize theta (w0 ... wn) randomly
theta = np.random.randn(2, 1)
count = 0
# 4. No convergence threshold is set; the iteration count is simply a hyperparameter, and once the iterations run out we treat the model as converged
for iteration in range(n_iterations):
    count += 1
    # 2. Compute the gradient
    gradients = 1.0/m * X_b.T.dot(X_b.dot(theta) - y)
    # 3. Update theta: theta_(t+1) = theta_t - learning_rate * gradient
    theta = theta - learning_rate * gradients
print(count)
print(theta)
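The comments above note that a fixed iteration count is used instead of a convergence threshold. Purely as an illustration of the alternative (this sketch is not from the original, and the tolerance value is an arbitrary choice), the loop could instead stop once the gradient norm becomes negligible:

import numpy as np
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
X_b = np.c_[np.ones((100, 1)), X]
learning_rate = 0.1
tolerance = 1e-6         # assumed threshold, for illustration only
max_iterations = 100000  # safety cap so the loop always terminates
m = 100
theta = np.random.randn(2, 1)
for iteration in range(max_iterations):
    gradients = 1.0/m * X_b.T.dot(X_b.dot(theta) - y)
    theta = theta - learning_rate * gradients
    # stop once the gradient is numerically close to zero
    if np.linalg.norm(gradients) < tolerance:
        break
print(iteration, theta)

Both versions depend on NumPy's vectorized matrix operations to compute the gradient; the timing comparison below shows why vectorization matters.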
import numpy as np
import time
a = np.array([1,2,3,4])
a = np.random.rand(1000000)
b = np.random.rand(1000000)
tic = time.time()
c = np.dot(a,b)
toc = time.time()
print("Vectorized version:" + str(1000*(toc-tic)) + 'ms')
c = 0
tic = time.time()
for i in range(1000000):
    c += a[i]*b[i]
toc = time.time()
print("for loop:" + str(1000*(toc-tic)) + 'ms')