
Study/Deep Learning (7)

[Deep learning] Class-Incremental Learning (LwF, PODNet) Storing all data and training on it at once is inefficient and arguably unrealistic, all the more so when the data is very large or contains sensitive information. Class-Incremental Learning (CIL), devised for exactly this situation, is a training scheme in which a model learns new classes progressively over time. In the traditional setting every class is learned at once; in CIL the data arrives incrementally, and the key requirement is that the model does not forget what it learned earlier while learning the new classes. This is the Catastrophic Forgetting problem that CIL addresses, and among the many methods proposed to tackle it I will briefly introduce LwF and PODNet. An ipynb file implementing both is in the Github repo.. 2024. 7. 8.
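
As a rough illustration of the idea behind LwF, here is a minimal PyTorch-style sketch of its distillation term, in which the new model's outputs on the old classes are pulled toward the frozen old model's soft predictions. The function names, the temperature T, and the weight lam are my own illustrative choices, not the post's actual implementation.

```python
import torch.nn.functional as F

def lwf_distillation_loss(new_logits_old_classes, old_logits, T=2.0):
    # Soft targets from the frozen old model, softened by temperature T
    old_probs = F.softmax(old_logits / T, dim=1)
    new_log_probs = F.log_softmax(new_logits_old_classes / T, dim=1)
    # Cross-entropy against the soft targets (the LwF distillation term)
    return -(old_probs * new_log_probs).sum(dim=1).mean() * (T * T)

def lwf_total_loss(logits, labels, old_logits, n_old_classes, lam=1.0, T=2.0):
    ce = F.cross_entropy(logits, labels)                         # loss on the new task's labels
    kd = lwf_distillation_loss(logits[:, :n_old_classes], old_logits, T)
    return ce + lam * kd                                         # learn new classes while preserving old outputs
```
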
[Deep learning] What is 'Style transfer'? (CVPR 2016) Image Style Transfer Using Convolutional Neural Networks (https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Gatys_Image_Style_Transfer_CVPR_2016_paper.pdf) (ECCV 2016) Perceptual Losses for Real-Time Style Transfer and Super-Resolution (https://arxiv.org/pdf/1603.08155.pdf) ✨ What is Style Transfer? It is a technique that keeps the 'content' of an image intact while transforming its 'style'. In particular, the two papers published in 2016, "Image Style Transfe.. 2024. 7. 7.
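
To make the content/style split concrete, here is a minimal sketch of a Gram-matrix style loss and feature-matching content loss in the spirit of Gatys et al.; the layer choices and the weights alpha and beta are illustrative assumptions, not values taken from the papers.

```python
import torch

def gram_matrix(features):
    # Channel-wise correlations of a (B, C, H, W) feature map: the 'style' representation
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_content_loss(gen_feats, content_feats, style_feats, alpha=1.0, beta=1e3):
    # Content: match raw feature maps at one (typically deep) layer
    content_loss = torch.mean((gen_feats[-1] - content_feats[-1]) ** 2)
    # Style: match Gram matrices across several layers
    style_loss = sum(torch.mean((gram_matrix(g) - gram_matrix(s)) ** 2)
                     for g, s in zip(gen_feats, style_feats))
    return alpha * content_loss + beta * style_loss
```
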
[Deep learning] Accelerating the Super-Resolution Convolutional Neural Network paper review (ECCV 2016) Accelerating the Super-Resolution Convolutional Neural Network (https://arxiv.org/pdf/1608.00367.pdf) This paper studies how to speed up the computation of the earlier Super-Resolution CNN (SRCNN). The super-resolution task takes a low-resolution image as input and restores it to high resolution. 1. The original SRCNN SRCNN is a model proposed by Dong et al. (2014); it basically consists of three convolutional layers and restores image resolution through the stages of Patch Extraction and Representation, Non-Linear Mapping, and Reconstruction.. 2024. 7. 7.
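
For reference, here is a minimal PyTorch sketch of the baseline SRCNN structure the review describes, applied to a bicubic-upscaled input; the filter counts and kernel sizes follow the common 9-1-5 configuration with 64 and 32 filters, which may differ from the exact settings in the post.

```python
import torch.nn as nn

class SRCNN(nn.Module):
    def __init__(self, channels=1):
        super().__init__()
        self.extract = nn.Conv2d(channels, 64, kernel_size=9, padding=4)      # patch extraction and representation
        self.mapping = nn.Conv2d(64, 32, kernel_size=1)                       # non-linear mapping
        self.reconstruct = nn.Conv2d(32, channels, kernel_size=5, padding=2)  # reconstruction
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):                 # x: bicubic-upscaled low-resolution image
        x = self.relu(self.extract(x))
        x = self.relu(self.mapping(x))
        return self.reconstruct(x)
```
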
[Deep Learning] Attention, Seq2Seq, Transformer I want to gather in one place the concepts that are essential for understanding the Vision Transformer. First, the post on RNN, LSTM, and GRU is linked below; knowing those concepts makes this one much easier to follow. https://yoomimi.tistory.com/entry/RNN-LSTM-GRU ([Deep Learning] RNN, LSTM, GRU complete summary★ (+ handwritten notes)) Before diving in, it also helps to know how machine translation has evolved: RNN > LSTM > .. 2024. 2. 19.
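
Since the post builds up to the Transformer, here is a minimal sketch of scaled dot-product attention, the operation shared by attention-based Seq2Seq models and the Transformer; the function name and the optional mask argument are illustrative conventions of my own.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v, mask=None):
    # Similarity between queries and keys, scaled by sqrt(d_k)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)    # attention distribution over positions
    return weights @ v, weights                # weighted sum of values, plus the weights
```
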
[Deep Learning] RNN, LSTM, GRU complete summary★ (+ handwritten notes) RNN (Recurrent Neural Network) Let's start from the more familiar CNN. A CNN uses its input(s) to predict an output, and in the process the data is not reused. That is only natural: a CNN feeds a whole input in at once. A whole input at once? Imagine putting an image through the convolution layer in a single shot instead of feeding its pixels one by one in order. You might object that the kernel size of the convolution layer means some parts are read before others, but does that order matter? Not at all. Locality being important in an image is a different thing from sequence being important. RNNs are for sequence data (time-series dat.. 2024. 1. 12.
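
To contrast with the CNN's one-shot processing described above, here is a minimal vanilla-RNN loop showing how the same hidden state is carried over and reused at every time step; the weight shapes are my own illustrative convention, not code from the post.

```python
import torch

def rnn_forward(x_seq, W_xh, W_hh, b_h):
    # x_seq: (time, batch, input), W_xh: (input, hidden), W_hh: (hidden, hidden)
    h = torch.zeros(x_seq.size(1), W_hh.size(0))     # initial hidden state
    outputs = []
    for x_t in x_seq:                                # process the sequence step by step
        h = torch.tanh(x_t @ W_xh + h @ W_hh + b_h)  # the previous state is reused here
        outputs.append(h)
    return torch.stack(outputs), h
```
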
[AI] What is ResNet? 📝 What is ImageNet? The ImageNet challenge is a well-known challenge in computer vision. ImageNet is a database holding a huge collection of images, and models have long competed on the task of classifying them. (*What is Top-5 error? Each prediction is allowed to propose 5 candidate classes, and if the ground truth is among those 5 it counts as accurate.) Looking at the graph above, AlexNet brought a significant drop in error in 2012, and ResNet was the first to surpass human-level performance in 2015. 📝 What is ResNet? So what exactly is ResNet? He, Kaiming; Zhang, Xiangyu.. 2024. 1. 5.
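
The Top-5 criterion described in the parenthesis can be written down directly; here is a minimal sketch of computing Top-5 error from a batch of logits (the function name is mine).

```python
import torch

def top5_error(logits, targets):
    # A sample counts as correct if the true class is among the 5 highest-scoring predictions
    top5 = logits.topk(5, dim=1).indices                   # (batch, 5)
    correct = (top5 == targets.unsqueeze(1)).any(dim=1)
    return 1.0 - correct.float().mean().item()
```
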
[AI] Summary of what to study in the Fall 2023 semester [Syllabus] linear regression, logistic regression, decision tree, ensemble learning (Bagging & Boosting), dimension reduction (PCA & LDA), neural networks (Basics / Backpropagation / In practice), CNNs (Convolutional Neural Networks), clustering [Prerequisites] linear algebra, probability, calculus, optimization, data structures, and Python [References] Pattern Recognition and Machine Learning (C. Bishop), An Introduction to Statis.. 2023. 8. 14.