Julia: A Programming Language Rising with Machine Learning
The Julia programming language offers the development productivity of Python together with the execution speed of C, and it was designed from the start for numerical computing. Julia can call C directly, and many open-source C and Fortran libraries are integrated into its standard library. It also has notebook support.
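As a quick taste of the C interop, here is a minimal sketch of my own (assuming a Unix-like system where libc can be found); ccall invokes the C function directly, with no wrapper or glue code:

# call the C standard library function clock() directly; no glue code needed
t = ccall((:clock, "libc"), Int32, ())
println(t)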
Julia aims to displace numerical tools such as R, MATLAB, and Octave. Its syntax is similar to other scientific-computing languages, and in many cases its performance rivals that of compiled languages. Julia's design follows three principles: fast, expressive, and dynamic. The Julia core is written in C; the rest is written in Julia itself.
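To illustrate the "fast, expressive, dynamic" claim with a toy sketch of my own: a single generic function definition is enough, and Julia compiles specialized native code for each argument type on first call:

# one generic definition; specialized machine code is compiled per argument type
f(x) = x^2 + 1
println(f(3))     # Int method, compiled on first call
println(f(3.0))   # Float64 method, specialized separately
# @code_native f(3.0)  # run in the REPL to inspect the generated assembly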
For now this language is not well known in China: search for Julia on Baidu and the first page contains not a single entry about the language; instead you get a Japanese AV star. Well...
The most popular language in machine learning today is still Python.
A language's rise and fall is directly tied to the community behind it: a strong community means more resources and more libraries, which in turn means more users. Julia's community seems to be made up mostly of people doing numerical computing, and its applications are confined to that niche for now; try building for the Web with it (there is one library) and you will wear yourself out.
This post uses Julia to demonstrate handwritten digit recognition, so you can see whether its syntax suits you.
A few Julia machine learning libraries
- ScikitLearn.jl: a scikit-learn-style interface, analogous to Python's scikit-learn (a short sketch follows this list)
- Mocha.jl: a deep learning framework inspired by Caffe
- TextAnalysis.jl: text analysis and natural language processing
- MXNet.jl: Julia bindings for MXNet
- TensorFlow.jl: a wrapper around TensorFlow
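To give a flavor of the first entry, here is a minimal ScikitLearn.jl sketch; the toy data is made up, and fit!/predict mirror the scikit-learn API:

using ScikitLearn
@sk_import linear_model: LogisticRegression

X = [1.0 2.0; 2.0 1.0; 3.0 4.0; 4.0 3.0]  # toy feature matrix: 4 samples, 2 features
y = [0, 0, 1, 1]                          # toy class labels
model = fit!(LogisticRegression(), X, y)  # train, scikit-learn style
println(predict(model, X))                # predict on the training samples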
Installing Julia
Julia source code: https://github.com/JuliaLang/julia
Binary downloads: http://julialang.org/downloads/
Ubuntu
$ sudo apt install gfortran
$ sudo apt install julia
macOS
$ brew install Caskroom/cask/julia
Handwritten digit recognition
Install Mocha.jl:
julia> Pkg.add("Mocha")
# or install the latest version: Pkg.clone("https://github.com/pluskid/Mocha.jl.git")
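(Pkg.add and Pkg.clone are the package API of the Julia 0.x releases that Mocha.jl targets; on Julia 1.0 and later the package manager is loaded with `using Pkg`, and Pkg.clone was removed in favor of Pkg.develop.)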
Test the installation:
julia> Pkg.test("Mocha")
Prepare the MNIST handwritten digit dataset: https://github.com/pluskid/Mocha.jl/tree/master/examples/mnist
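(At the time of writing, that directory ships a get-mnist.sh script that downloads MNIST and converts it to HDF5; data/train.txt and data/test.txt are plain text files listing the resulting HDF5 files, which is what the HDF5 data layers in the code below read.)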
The code:
# https://github.com/pluskid/Mocha.jl/blob/master/examples/mnist/mnist.jl
using Mocha
# fix the random seed so weight initialization is reproducible
srand(12345678)
# training data: HDF5 files listed in data/train.txt, loaded asynchronously in shuffled batches of 64
data_layer = AsyncHDF5DataLayer(name="train-data", source="data/train.txt", batch_size=64, shuffle=true)
# LeNet-style network: two convolution+pooling stages followed by two fully-connected layers
conv_layer = ConvolutionLayer(name="conv1", n_filter=20, kernel=(5,5), bottoms=[:data], tops=[:conv])
pool_layer = PoolingLayer(name="pool1", kernel=(2,2), stride=(2,2), bottoms=[:conv], tops=[:pool])
conv2_layer = ConvolutionLayer(name="conv2", n_filter=50, kernel=(5,5), bottoms=[:pool], tops=[:conv2])
pool2_layer = PoolingLayer(name="pool2", kernel=(2,2), stride=(2,2), bottoms=[:conv2], tops=[:pool2])
fc1_layer = InnerProductLayer(name="ip1", output_dim=500, neuron=Neurons.ReLU(), bottoms=[:pool2], tops=[:ip1])
fc2_layer = InnerProductLayer(name="ip2", output_dim=10, bottoms=[:ip1], tops=[:ip2])
loss_layer = SoftmaxLossLayer(name="loss", bottoms=[:ip2,:label])
# pick the default backend (CPU, or GPU if Mocha was built with it) and initialize it
backend = DefaultBackend()
init(backend)
# layers shared between the training net above and the test net below
common_layers = [conv_layer, pool_layer, conv2_layer, pool2_layer, fc1_layer, fc2_layer]
net = Net("MNIST-train", backend, [data_layer, common_layers..., loss_layer])
exp_dir = "snapshots-$(Mocha.default_backend_type)"
# SGD with fixed momentum and an inverse-decay learning-rate schedule
method = SGD()
params = make_solver_parameters(method, max_iter=10000, regu_coef=0.0005,
mom_policy=MomPolicy.Fixed(0.9),
lr_policy=LRPolicy.Inv(0.01, 0.0001, 0.75),
load_from=exp_dir)
solver = Solver(method, params)
# persist solver statistics to disk every 1000 iterations
setup_coffee_lounge(solver, save_into="$exp_dir/statistics.jld", every_n_iter=1000)
# report training progress every 100 iterations
add_coffee_break(solver, TrainingSummary(), every_n_iter=100)
# save snapshots every 5000 iterations
add_coffee_break(solver, Snapshot(exp_dir), every_n_iter=5000)
# show performance on test data every 1000 iterations
data_layer_test = HDF5DataLayer(name="test-data", source="data/test.txt", batch_size=100)
acc_layer = AccuracyLayer(name="test-accuracy", bottoms=[:ip2, :label])
test_net = Net("MNIST-test", backend, [data_layer_test, common_layers..., acc_layer])
add_coffee_break(solver, ValidationPerformance(test_net), every_n_iter=1000)
solve(solver, net)
# optional: profile the solve call and dump the report to profile.txt
#Profile.init(Int(1e8), 0.001)
#@profile solve(solver, net)
#open("profile.txt", "w") do out
#  Profile.print(out)
#end
# release network and backend resources
destroy(net)
destroy(test_net)
shutdown(backend)
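Save the script as mnist.jl in the example directory and run it with `julia mnist.jl`. Per the coffee breaks configured above, you should see a training summary every 100 iterations, snapshots written every 5000, and test-set accuracy reported every 1000.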
If you repost this article, please keep it intact and credit the author @斗大的熊猫 along with the original URL: http://blog.topspeedsnail.com/archives/11069