Chat bot making process using Python 3 & TensorFlow
The document outlines the development process of a chatbot using Python 3 and TensorFlow, focusing on machine learning models and techniques for natural language processing. It covers various components of chatbot architecture, including data preprocessing, context analysis, response generation, and disintegration of input for improved understanding. Key challenges in creating effective Korean-specific chatbots and the integration of emotion and context-aware processing in conversations are also discussed.
Introduction by Jeongkyu Shin, discussing his background and the session focus on chat bot creation using Python and TensorFlow.
Overview of components needed for chat bot creation, including data sources and tools like TensorFlow and Python, with an emphasis on multi-modal learning.
Definition and components of chat bots, distinguishing traditional and machine learning-based models, along with common problems faced in Korean language processing.
Discussion on brain functions as a multi-modal context machine and architectural ideas for chat bot processing and generating responses.
Steps necessary for creating machine learning models including data preparation, training, and testing procedures.
Structure of bot models, importance of disintegration for morpheme analysis and the workflow for transforming sentences for bot training.
Training bot models using sentence sequences, incorporating emotion and context flags, and dealing with training challenges.
Technical specifications, serving of the bot through Telegram API, and implementation instructions.
Recap of the chat bot development process, future plans for improvements and expansions in the dataset used for training.
Thank you note and references for further reading and studies related to the topics discussed in the presentation.
I’M
Humble businessman
Lablup Inc.: Coding education platform / technology
Open-source devotee
Textcube maintainer / KOSS Lab.
Play with some (open||hidden) projects / companies
Physicist / Neuroscientist
Adjunct Professor / Hanyang Univ. (Computer Science)
Studied information processing procedure in Brain / social systems
Ph.D in Statistical Physics (complex system)
Major in Physics / Computer science
신정규 / Jeongkyu Shin / @inureyes
> RUNME --LOOP=4
Became the first man to get two official presenter shirts in PyCon APAC 2016!
8.13.2016 (in Korean)
8.14.2016 (in English)
And at Google Developer Community
8.31.2016
And now.
Are you ready? (I’m not ready)*
*Parody of something. Never mind.
TODAY’S ENTREE: CHATBOT
Python 3
Twitter Korean Analyzer / Komoran with KoNLPy / pandas
TensorFlow
0.8 -> 0.9 -> 0.10RC0
And special sauce!
Special data with unique order
Special python program to organize / use the data!
Clipart* (c) thetomatos.com
INGREDIENTS FOR TODAY'S RECIPE
Data
Test: FAS dataset (26GB)
Today: the “Idolm@ster” series, etc.
Tools
TensorFlow + Python 3
Today’s insight
Multi-modal Learning models and model chaining
I’m not sure but
I’ll try to explain
the
whole process I did
(in 30 minutes?)
Game screenshot* (c) CAVE
Forkcrane* (c) Iconix
And I assume that
you already have
experience /
knowledge about
machine learning
and TensorFlow
Illustration *(c) marioandluigi97.deviantart.com
ONE DAY IN SEOUL ITAEWON, 2013
All started with dinner talks of neuroscientists...
WHAT IS A CHATBOT?
“Chatting bots”
One of the oldest Human-Computer Interface (HCI) based machines
Challenging lexical topics
Interface evolution: Text → Speech (vocal) → Brain-Computer Interface (BCI)
Commercial UI: messengers!
CHAT-BOTS WITH MACHINE LEARNING
Pipeline diagram:
Lexical input → deep-learning NLP model (SyntaxNet / Natural Language Understanding) → sentence-to-vector converter → context analyzer + decision maker → response generator (deep-learning model: RNN / sentence-to-sentence) → lexical output
Supporting components: knowledge base (useful with TF/IDF ask bots) and per-user context memory
PROBLEMS
Hooray! Deep-learning based chat bots work well in the Q&A scenario!
General problems
Inhuman: restricted to their model training sets
Cannot "start" a conversation
Cannot handle continuous conversational context and its changes
Korean-specific problems
Dynamic type changes
Postpositions / conjunctions (Josa hell)
BRAIN AS A MULTI-MODAL CONTEXT MACHINE
Selection
Functionally orthogonal connection types should have complementary indicators for smaller dimensionality / better representation
Mixture
Final axes are weighted according to the context density of mixtures
Weight function
Maximize the state difference in context space
Space transformation: likelihood estimation
INFORMATION PATHWAY DURING CONVERSATION
During conversation:
1. Preprocessing
2. Send information
3. Context recognition
4. Spread / gather processes to determine the answer
5. Send the conceptual response to the parietal lobe
6. Postprocessing to generate the sentence
Clipart* (c) cliparts.co
ARCHITECTING
Separate the dots
Simplify information going into the context analyzer
Generate complex responses using diverse models
Sentence generator
Grammar generator model
Turns a simple word sequence into a complete sentence
Tone generator model
Recasts the sentence sequence in a specific tone
IDEAS FROM STRUCTURE
During conversation:
1. Disintegrator
2. Send information
3. Context parser
4. Decision maker using the ML model
5. Send the conceptual response to the sentence generators (grammar engine)
6. Postprocessing with the tone engine to generate the sentence
Clipart* (c) cliparts.co
IDEAS FROM STRUCTURE
Multi-modal model
Disintegrator (simplifies a sentence into morphemes)
Bot engine
Generates a morpheme sequence
Grammar model
Makes a meaningful sentence from the morpheme sequence
Tone model
Changes some conjunctions (eomi) / words in the grammar model result
CREATING ML MODELS
Define
input function
step function
evaluator
batch
Prepare
train dataset
test dataset
Runtime environment
Make
Estimator
Optimizer
Do
Training
Testing
Predicting
(See the workflow sketch after this list.)
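As a minimal sketch of this Define / Prepare / Make / Do workflow (a hypothetical toy regression, not the talk's bot code; tf.initialize_all_variables was renamed tf.global_variables_initializer in later releases):

import numpy as np
import tensorflow as tf

# Prepare: toy train dataset
train_x = np.random.rand(100, 1).astype(np.float32)
train_y = 2.0 * train_x + 1.0

# Define: input placeholders, model, and evaluator (loss)
x = tf.placeholder(tf.float32, [None, 1])
y = tf.placeholder(tf.float32, [None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
pred = tf.matmul(x, w) + b
loss = tf.reduce_mean(tf.square(pred - y))

# Make: optimizer and training step
train_op = tf.train.AdamOptimizer(0.1).minimize(loss)

# Do: training / testing / predicting
with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())
    for step in range(200):  # full-batch steps, for brevity
        sess.run(train_op, {x: train_x, y: train_y})
    print(sess.run(loss, {x: train_x, y: train_y}))   # testing
    print(sess.run(pred, {x: [[0.5]]}))               # predicting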
MODEL CHAIN ORDER
Pipeline diagram:
Lexical input (normal text)
→ Disintegrator → fragmented text sequence
→ Response generator (NLP + sentence-to-vector AI: context analyzer + decision maker) → semantic sequence
→ Sentence generator: grammar generator → (almost) normal text → tone generator → text with tones
→ Lexical output
DISINTEGRATOR
Rouzeta (https://shleekr.github.io/)
Finite state-based Korean morphological analyzer (released two months ago!)
Great and fast / with a Python wrapper! (just three days ago!)
Twitter Korean analyzer
Compact and very fast / can easily be adopted via the KoNLPy package
Komoran can be a good alternative (given enough time)
Komoran with the ko_restoration package (https://github.com/lynn-hong/ko_restoration)
Increases both model training accuracy / speed
However, it is soooooooo slow... (> 100x longer execution time)
DISINTEGRATOR
import konlpy

tagger = konlpy.tag.Twitter()  # reuse one tagger instance instead of creating two per call

def get_training_data_by_disintegration(sentence):
    # Normalized + stemmed morphemes for the bot / grammar-model input
    disintegrated_sentence = tagger.pos(sentence, norm=True, stem=True)
    # Raw morphemes for the grammar-model output
    original_sentence = tagger.pos(sentence)
    inputData = []
    outputData = []
    is_asking = False
    for w, t in disintegrated_sentence:
        if t not in ['Eomi', 'Josa', 'Number', 'KoreanParticle', 'Punctuation']:
            inputData.append(w + '/' + t)
    for w, t in original_sentence:
        if t not in ['Number', 'Punctuation']:
            outputData.append(w)
    if original_sentence[-1][1] == 'Punctuation' and original_sentence[-1][0] == '?':
        if len(inputData) != 0 and len(outputData) != 0:
            is_asking = True  # To extract ask-response raw data
    return ' '.join(inputData), ' '.join(outputData), is_asking
SAMPLE DISINTEGRATOR
Super simple disintegrator using the Twitter Korean analyzer (with the KoNLPy interface)
나는 오늘 아침에 된장국을 먹었습니다.
[('나', 'Noun'), ('는', 'Josa'), ('오늘', 'Noun'), ('아침', 'Noun'), ('에', 'Josa'), ('된장국', 'Noun'), ('을', 'Josa'), ('먹다', 'Verb'), ('.', 'Punctuation')]
나 오늘 아침 된장국 먹다
(venv) disintegrator » python test.py
Original : 나는 오늘 아침에 된장국을 먹었습니다.
Disintegrated for bot / grammar input : 나 오늘 아침 된장국 먹다
Training data for grammar model output: 나 는 오늘 아침 에 된장국 을 먹었 습니다
(English: "I ate miso soup this morning." → I / today / morning / miso soup / eat)
DATA RECYCLING / REUSING
Data recycling
Input of the disintegrator → output of the grammar model
Output of the disintegrator → input of the grammar model
(A pair-building sketch follows the examples below.)
original sentence (output for grammar model): 그럼 다시 한 번 프로듀서 께서 소신 표명 을 해주시 겠 어요 ?
Disintegrated sentence (input for grammar model): 그렇다 다시 하다 번 프로듀서 소신 표명 해주다
original sentence (output for grammar model): 저기 . 그러니까 .
Disintegrated sentence (input for grammar model): 저기 그러니까
original sentence (output for grammar model): 프로듀서 로서 아직 경험 은 부족하지 만 아무튼 열심히 하겠 습니다 .
Disintegrated sentence (input for grammar model): 프로듀서 로서 아직 경험 부족하다 아무튼 열심히 하다
original sentence (output for grammar model): 꿈 은 다 함께 톱 아이돌 !
Disintegrated sentence (input for grammar model): 꿈 다 함께 톱 아이돌
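As a minimal sketch of this recycling (reusing the get_training_data_by_disintegration function shown earlier; the helper name is hypothetical):

def make_grammar_pairs(sentences):
    # The disintegrated form becomes the grammar model's input,
    # and the original sentence becomes its output target.
    pairs = []
    for sentence in sentences:
        disintegrated, original, _ = get_training_data_by_disintegration(sentence)
        if disintegrated and original:
            pairs.append((disintegrated, original))
    return pairs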
CONVERSATION BOT MODEL
Embedding RNN sequence-to-sequence model for chit-chat
For testing purposes: 4-layer to 8-layer shallow learning (without input/output layers)
Uses tensorflow.contrib.learn (formerly the skflow package, with a scikit-learn-style interface)
Simpler and easier than a traditional (3 months ago?) handcrafted RNN
Of course, seq2seq, LSTMCell, and GRUCell are all bundled!
What is a deep-learning model? According to review papers, ML models with > 10 layers are. And it's changing now... it became a buzzword.
(A construction sketch follows.)
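As a minimal construction sketch (not the talk's actual code) of such an embedding RNN seq2seq model, written against the TF 0.x-era API (these modules moved to tf.contrib.legacy_seq2seq / tf.contrib.rnn in later releases); vocabulary and sequence sizes are illustrative assumptions:

import tensorflow as tf

vocab_size = 5000   # illustrative assumption
seq_len = 20        # illustrative assumption
num_layers = 4      # "4-layer to 8-layer" per the slide

# Stacked GRU cells; GRUCell / LSTMCell / seq2seq are all bundled
cell = tf.nn.rnn_cell.MultiRNNCell(
    [tf.nn.rnn_cell.GRUCell(25) for _ in range(num_layers)])

# One int32 token-id placeholder per timestep, as the 0.x API expects
encoder_inputs = [tf.placeholder(tf.int32, [None]) for _ in range(seq_len)]
decoder_inputs = [tf.placeholder(tf.int32, [None]) for _ in range(seq_len)]

# Embeds token ids, encodes, then decodes; feed_previous=True for inference
outputs, state = tf.nn.seq2seq.embedding_rnn_seq2seq(
    encoder_inputs, decoder_inputs, cell,
    num_encoder_symbols=vocab_size,
    num_decoder_symbols=vocab_size,
    embedding_size=25,
    feed_previous=False)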
MEMORY AND EMOTION
Context memory as short-term memory
Memorizes the current context (variable categories; tested 4-type situations)
Emotion engine as a model
Understands the past / current emotion of the user
Use the context memory / emotion engine as the first inputs of the context parser model (for training / serving)
Diagram: context parser input = context memory + emotion engine + disintegrated sentence fragments
CONVERSATIONAL CONTEXT LOCATOR
Using skip-gram and bidirectional 1-gram distributions in recent text
나는 오늘 아침에 된장국을 먹었습니다. ⇒ disintegrate first: 나 오늘 아침 된장국 먹다
Bidirectional 1-gram set: {(나,아침),오늘}, {(오늘,된장국),아침}, {(아침,먹다),된장국}
Simplifying (<I> 오늘 아침 <FOOD> <EAT>): {(<I>,아침),오늘}, {(오늘,<FOOD>),아침}, {(아침,<EAT>),<FOOD>}
Distribution: more simplification is needed (<I> <TIME:DATE> <TIME:DAY> <FOOD> <EAT>):
{(<I>,<TIME:DAY>),<TIME:DATE>}, {(<TIME:DATE>,<FOOD>),<TIME:DAY>}, {(<TIME:DAY>,<EAT>),<FOOD>}
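As a minimal pure-Python sketch of building those bidirectional 1-gram sets (hypothetical helper, not the talk's code):

def bidirectional_1grams(tokens):
    # For each interior token, pair its left and right neighbors with it:
    # {(prev, next), current} in the slide's notation.
    return [((tokens[i - 1], tokens[i + 1]), tokens[i])
            for i in range(1, len(tokens) - 1)]

print(bidirectional_1grams('나 오늘 아침 된장국 먹다'.split()))
# [(('나', '아침'), '오늘'), (('오늘', '된장국'), '아침'), (('아침', '먹다'), '된장국')]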
CONVERSATIONAL CONTEXT LOCATOR
Training the context space
Context-marked sentences (> 20,000)
Contexts: LIFE / CHITCHAT / SCIENCE / TASK
Prepare generated 1-gram sets with context bits
Train an RNN with 1-gram-2-vec
Matching the context space
Feed the bidirectional 1-gram sequence into the context space
Take the dominant axis
EMOTION ENGINE
Input: text sequence
Output: emotion flag (6-type / 3-bit)
Training set
Sentences with 6-type categorized emotions
Uses SentiWordNet to extract emotion
6-axis emotional space built using a WordVec model
Current emotion indicator: the most weighted emotion axis in the WordVec model
Illustration *(c) http://ontotext.fbk.eu/
Position in senti-space: [0.95, 0.14, 0.01, 0.05, 0.92, 0.23] (axes 1-6) → one-hot [1, 0, 0, 0, 0, 0] → flag 0x01
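As a minimal sketch of that last step (the flag encoding is my assumption: the 1-based index of the dominant axis, which fits in 3 bits):

def emotion_flag(senti_position):
    # Pick the most weighted emotion axis and return its 1-based index
    dominant = max(range(len(senti_position)), key=lambda i: senti_position[i])
    return (dominant + 1) & 0b111

print(hex(emotion_flag([0.95, 0.14, 0.01, 0.05, 0.92, 0.23])))  # 0x1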
KNOWLEDGE ENGINE
Advanced topic: not necessary for chit-chat bots
Searches tokenized knowledge related to the current conversation
Querying information
If the target of the conversation is a query, use the knowledge engine result as input to the sentence generator
If the information fitness is high, knowledge + template shows great results
That's why information-serving bots will arrive first.
Big topic: I'll not cover it today.
GRAMMAR GENERATOR
Training set
Make sequences by disintegrating normal sentences
Remove postpositions / conjunctions from the sequence
Normalize nouns, verbs, adjectives
Model
3-layer sequence-to-sequence model (for each encoder / decoder)
Estimator: ADAM optimizer with GRU cells
Adagrad with LSTM cells is also OK; in my case, ADAM + GRU works slightly better. (Data size effect?)
Hidden feature size of the GRU cell: 25; embedding dimension for each word: 25.
(A loss / optimizer wiring sketch follows.)
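Continuing the construction sketch from the bot-model section (same TF 0.x-era assumptions; outputs and seq_len come from the embedding_rnn_seq2seq sketch), the slide's estimator choice wires in roughly like this:

targets = [tf.placeholder(tf.int32, [None]) for _ in range(seq_len)]
weights = [tf.ones_like(t, dtype=tf.float32) for t in targets]  # no padding mask, for brevity

# Average per-timestep cross-entropy over the decoded sequence
loss = tf.nn.seq2seq.sequence_loss(outputs, targets, weights)

# ADAM + GRU (hidden size 25, embedding 25) worked slightly better than Adagrad + LSTM
train_op = tf.train.AdamOptimizer(learning_rate=0.001).minimize(loss)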
TONE GENERATOR
"Tones" to make sentences more humanized
Every sentence has tones by speaker
The most important part of building the "pretty girl chat-bot"
Model
3-layer sequence-to-sequence model
Almost the same as the grammar model (the training set is different)
Can also be used to make a chat bot speak "dialects"
TONE GENERATOR
Input: sentence without tones
Output: sentence with tones
Data: normal sentences from various conversation sources
Training / test set
Remove tones from normal sentences
Morpheme analysis effectively removes tone from a sentence.
USEFUL TIPS
A sequence-to-sequence model is inappropriate for the bot engine
It easily diverges during training
Of course, plain RNN training will not work either
In this case, the input / output sequence relationship is too complex
Very hard to inject context-awareness into the conversation
A context-aware response needs to "generate" the sentence not only from the ask, but from context-aware data / the knowledge base / the decision-making process
Idea: turn the input sequence into a semantic bundle
It will work, I guess...
TRAINING BOT MODEL
Input
Disintegrated sentence sequence without postpositions / conjunctions
Emotion flag (3 bits)
Context flag (extensible; appended to the sentence as a special indicator / 2 bits)
Output
Answer sequence with nouns, pronouns, verbs, adjectives
Learning
Supervised learning (for a simple communication model / replaces templates)
Reinforcement learning (for the emotion / context flags, on-the-fly production)
(An input-encoding sketch follows this list.)
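As a minimal sketch of one plausible input encoding (the special-token format is my assumption, not the talk's):

def encode_bot_input(tokens, emotion_flag, context_flag):
    # Append the flags as special pseudo-tokens so the encoder can
    # condition on them: 3-bit emotion, 2-bit context.
    return tokens + ['<EMO:%d>' % (emotion_flag & 0b111),
                     '<CTX:%d>' % (context_flag & 0b11)]

print(encode_bot_input('나 오늘 아침 된장국 먹다'.split(), 0b001, 0b10))
# ['나', '오늘', '아침', '된장국', '먹다', '<EMO:1>', '<CTX:2>']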
TRAINING BOT MODEL
Training set
FAS log data (http://antispam.textcube.org)
2006~2016 (from EAS data) / comments on weblogs / log size ~1TB (with spams)
Visited and crawled the non-spam data, based on comment links (~26GB / MariaDB)
Original / reply pairs as input / output
Preprocessing
Remove non-Korean characters from the data
Data anonymization of id / name / e-mail information
TRAINING GRAMMAR GENERATOR
Original data set
Open books without license problems (https://ko.wikisource.org)
Comments are not a good dataset for learning grammar
Preprocessing
Input data: disintegrated sentence sequence
Output data: original sentence sequence
TRAINING TONE GENERATOR
Original data set
Open books without license problems
Extract sentences wrapped in quotation marks (")
e.g. "집에서 온 편지유? 무슨 걱정이 생겼수?" ("A letter from home? What's worrying you?", in dialect tone)
Preprocessing
Input data: sentence sequence without tone
e.g. "집에서 온 편지? 무슨 걱정 생기다?" (produced with the morpheme analyzer)
Output data: original sentence sequence
(A quoted-sentence extraction sketch follows.)
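As a minimal sketch of the quoted-sentence extraction (the regex is my assumption about the implementation):

import re

def extract_quoted(text):
    # Pull out dialogue wrapped in double quotation marks
    return re.findall(r'"([^"]+)"', text)

print(extract_quoted('그가 물었다. "집에서 온 편지유? 무슨 걱정이 생겼수?"'))
# ['집에서 온 편지유? 무슨 걱정이 생겼수?']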
DATA SOURCE
Subtitle (caption) files of many animations!
Prototyping
Idolm@ster conversation scripts (translated by online fans)
Field tests
Animations with only female characters
New data!
Communication scripts from Idolm@ster 2 / OFA
Scripts from Idolm@ster PS
DATA CONVERTER
Pipeline (subtitle_converter.py):
1. Convert .smi to .srt
2. Join the .srt files into one .txt: cat *.srt >> data.txt
3. Remove timestamps and blank lines
4. Remove logo / ending song scripts: lines with Japanese characters, and the lines following them
5. Fetch character names / nouns / numbers using a custom dictionary (anime characters, locations, specific nouns)
*.smi file format is the de facto standard for movie caption files in Korea
(A cleanup sketch for steps 3-4 follows.)
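As a minimal sketch of steps 3-4 (my reconstruction; the talk's subtitle_converter.py is not shown), stripping SRT cue numbers, timestamps, and blank lines, and dropping Japanese-script lines together with the line after them:

import re

def clean_srt_lines(lines):
    cleaned, skip_next = [], False
    for line in lines:
        line = line.strip()
        if skip_next:                       # drop the line right after a Japanese one
            skip_next = False
            continue
        if not line or line.isdigit():      # blank lines and SRT cue numbers
            continue
        if re.match(r'\d{2}:\d{2}:\d{2},\d{3} -->', line):   # timestamp lines
            continue
        if re.search(r'[\u3040-\u30ff]', line):   # hiragana / katakana: song / logo lines
            skip_next = True
            continue
        cleaned.append(line)
    return cleaned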
Extract Conversations
Pipeline (subtitle_converter.py + pandas):
Reformat: merge sliced captions into one line
Extract conversation data for the sequence-to-sequence bot model:
if last_sentence[-1] == '?':
    conversation.add((last_sentence, current_sentence))
Remove: too-short sentences and duplicates (pandas)
Sentence data feeds the disintegrator / grammar model / tone model
Train the disintegrator / integrator with the grammar model and tone model
Train the bot model
(A pandas filtering sketch follows.)
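As a minimal pandas sketch of the filtering step (column names and the length threshold are my assumptions):

import pandas as pd

# conversations: (ask, response) pairs collected by the extractor above
conversations = [('기다리다 줄 수 없다?', '네 충분하다 기다리다.'),
                 ('기다리다 줄 수 없다?', '네 충분하다 기다리다.')]
df = pd.DataFrame(conversations, columns=['ask', 'response'])
df = df.drop_duplicates()              # remove duplicate pairs
df = df[df['ask'].str.len() >= 5]      # drop too-short sentences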
CONVENIENCES FOR DEMO
Simple bot engine
Ask-response sentence similarity match engine (similar to a template engine; see the matcher sketch below)
Merge the grammar model with the tone model
Grammar is not that important for creating an anime character bot?
Loose parameter set
For fast convergence: the data size is not big / not too diverse
No knowledge engine
We just want to talk with him/her.
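As a minimal sketch of such a similarity matcher, one simple standard-library option is difflib (the talk does not specify the actual implementation):

import difflib

def match_response(ask, template_pool):
    # template_pool: dict mapping known ask sentences to canned responses
    best = max(template_pool,
               key=lambda known: difflib.SequenceMatcher(None, ask, known).ratio())
    return template_pool[best]

pool = {'기다리다 줄 수 없다': '네 충분하다 기다리다'}
print(match_response('기다리다 수 없다', pool))  # 네 충분하다 기다리다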
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
total conversations: 4217
Transforming...
Total words, asked: 1062, response: 1128
Steps: 0
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:924] successful NUMA node read from SysFS had
negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: GeForce GTX 970
major: 5 minor: 2 memoryClockRate (GHz) 1.304
pciBusID 0000:01:00.0
Total memory: 4.00GiB
Free memory: 3.92GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:806] Creating TensorFlow device (/gpu:0) -> (device:
0, name: GeForce GTX 970, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1501 get requests,
put_count=1372 evicted_count=1000 eviction_rate=0.728863 and unsatisfied allocation rate=0.818787
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:256] Raising pool_size_limit_ from 100 to 110
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2405 get requests,
put_count=2388 evicted_count=1000 eviction_rate=0.41876 and unsatisfied allocation rate=0.432432
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:256] Raising pool_size_limit_ from 256 to 281
Bot training procedure (initialization)
ask: 시 분시작 하다 이 것 대체 <REP>.
response (pred): NAME 해오다 <REP>.
response (gold): NAME 죄송하다.
ask: 쟤 네 <UNK> 사무소 주제 너무 <UNK> 하다 거 알다.
response (pred): NAME 해오다 <REP>.
response (gold): 아깝다 꼴 찌다 주목 다 받다
ask: <UNK> 아니다 <REP>.
response (pred): NAME 해오다 <REP>.
response (gold): 더 못 참다
ask: 이렇다 상태 괜찮다 <REP>.
response (pred): 이렇다 여러분 <REP>.
response (gold): NOUN 여러분.
ask: 기다리다 줄 수 없다 <REP>.
response (pred): 네 충분하다 기다리다 <REP>.
response (gold): 네 충분하다 기다리다.
ask: 넌 뭔가 생각 하다 거 있다 <REP>.
response (pred): 물론 이 <REP>.
response (gold): 물론 이.
Bot model training procedure (after first fitting)
Bot model training procedure (after 50 more fittings)
Trust me. Your NVIDIA card can not only play Overwatch, but run this, too.
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcublas.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcudnn.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcufft.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcuda.so locally
I tensorflow/stream_executor/dso_loader.cc:108] successfully opened CUDA library libcurand.so locally
total line: 7496
Fitting dictionary for disintegrated sentence...
Fitting dictionary for recovered sentence...
Transforming...
Total words pool size: disintegrated: 3800, recovered: 5476
I tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:924] successful NUMA node read from SysFS had
negative value (-1), but there must be at least one NUMA node, so returning NUMA node zero
I tensorflow/core/common_runtime/gpu/gpu_init.cc:102] Found device 0 with properties:
name: GeForce GTX 970
major: 5 minor: 2 memoryClockRate (GHz) 1.304
pciBusID 0000:01:00.0
Total memory: 4.00GiB
Free memory: 3.92GiB
I tensorflow/core/common_runtime/gpu/gpu_init.cc:126] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_init.cc:136] 0: Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:806] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 970, pci bus id: 0000:01:00.0)
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 1501 get requests,
put_count=1372 evicted_count=1000 eviction_rate=0.728863 and unsatisfied allocation rate=0.818787
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:256] Raising pool_size_limit_ from 100 to 110
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:244] PoolAllocator: After 2405 get requests,
put_count=2388 evicted_count=1000 eviction_rate=0.41876 and unsatisfied allocation rate=0.432432
I tensorflow/core/common_runtime/gpu/pool_allocator.cc:256] Raising pool_size_limit_ from 256 to 281
Grammar+Tone model training procedure (initialization)
disintegrated: 올해 우리프로덕션 NOUN 의 활약 섭외 들어오다 <REP>.
recovered (pred): 그래서 저기 들 나요 <REP>.
recovered (gold): 올해 는 우리 프로덕션 도 NOUN 의 활약 으로 섭외 가 들어왔 답 니다.
disintegrated: 둘 다 왜 그렇다 <REP>.
recovered (pred): 어머 어머 아 <REP>.
recovered (gold): 둘 다 왜 그래.
disintegrated: 정말 우승 하다 것 같다 <UNK> .
recovered (pred): 정말 를 <REP>.
recovered (gold): 정말 우승할 것 같네 요.
disintegrated: 아 진짜 <REP>.
recovered (pred): 아 아 을까 <REP>.
recovered (gold): 아 진짜.
disintegrated: 호흡 딱 딱 맞다 <REP>.
recovered (pred): 무슨 을 <REP>.
recovered (gold): 호흡 이 딱 딱 맞 습니다.
disintegrated: 무슨 소리 NAME <REP>.
recovered (pred): 무슨 소리 음 <REP>.
recovered (gold): 무슨 소리 야 NAME.
disintegrated: 너 맞추다 또 넘어지다 거 잖다 <UNK> <UNK> <UNK> <UNK>.
recovered (pred): 너 겹친 또 넘어질 거 <REP>.
recovered (gold): 너 한테 맞춰 주 면 또 넘어질 거 잖아.
disintegrated: 중계 나름 신경 써주다 <REP>.
recovered (pred): 무대 에서도 을 신경 <REP>.
recovered (gold): 중계 에서도 나름 대로 신경 을 써줘.
Grammar+Tone model training procedure (after first fitting)
Grammar+Tone model training procedure (after 10 more fittings)
Grammar model converges fast. With a GPU, it converges much faster.
USEFUL TIPS FOR ANIME CHARACTER BOT
DO NOT MIX different anime subtitles
Training easily diverges during grammar model training. Strange, huh?
Does it come from different translators' tones? Need to check why.
Choose animations with an extreme gender ratio
Very hard to separate gender-specific conversations in the data
Tones of Japanese animation characters differ greatly by speaker gender
Just choose boy-only / girl-only animations for easy data categorization
AND TODAY'S OBSTACLES
From TensorFlow 0.9RC, Estimator/TensorFlowEstimator.restore was removed and has not returned yet
I can create / train the model but cannot load it with the original code on TF 0.10RC
Made some tricks for today's demo
Auto-generated talk templates from the bot
Response matcher (matches the ask sentence and returns a response from the template pool)
The conversation dataset is too small to create a conversation model
Talks are not smooth
It easily diverges; train many, many models to get a proper result
SUMMARY
Today
Covered the garage chatbot-making procedure
Making a chat bot with TensorFlow + Python 3
My contributions / insights for you
Multi-modal learning models / structures for chat-bots
Ideas to generate "data" for chat-bots
AND NEXT...
Add Idolm@ster 2 / OFA game conversation scripts to the current dataset
Suggested by Shin Yeaji (PyCon APAC staff) and Eunjin Hwang this week
Train the bot with some animations unknown (to me)
Finish anonymization of the FAS data and re-train the bot with TensorFlow (almost finished!)
In fact, the FAS data-based bot runs on Caffe (http://caffe.berkeleyvision.org/)
Preparing this talk encouraged me to migrate my Caffe projects to TensorFlow
RL-based context parser with preprocessed data
Further test and adopt Rouzeta in the Miki_bot engine
Feed seq2seq into the bot engine?
By turning the input sequence into a semantic bundle (in August)
Working, but it needs more work...
HOME ASSIGNMENT
If you are a Loveliver*, you already know what to do.
Internet meme * (c) Marble Entertainment / inven.co.kr
Are you Lov..?
Idol M@ster?
*A fan of Love Live! (another Japanese animation)
SELECTED REFERENCES
De Brabandere, B., Jia, X., Tuytelaars, T., & Van Gool, L. (2016, June 1). Dynamic Filter Networks. arXiv.org.
Noh, H., Seo, P. H., & Han, B. (2015, November 18). Image Question Answering using Convolutional Neural Network with Dynamic Parameter Prediction. arXiv.org.
Andreas, J., Rohrbach, M., Darrell, T., & Klein, D. (2015, November 10). Neural Module Networks. arXiv.org.
Bengio, S., Vinyals, O., Jaitly, N., & Shazeer, N. (2015, June 10). Scheduled Sampling for Sequence Prediction with Recurrent Neural Networks. arXiv.org.
Jordan, M. I., & Mitchell, T. M. (2015). Machine learning: Trends, perspectives, and prospects. Science, 349(6245), 253–255. http://doi.org/10.1126/science.aac4520
Bahdanau, D., Cho, K., & Bengio, Y. (2014, September 2). Neural Machine Translation by Jointly Learning to Align and Translate. arXiv.org.
Schmidhuber, J. (2014, May 1). Deep Learning in Neural Networks: An Overview. arXiv.org. http://doi.org/10.1016/j.neunet.2014.09.003
Zaremba, W., Sutskever, I., & Vinyals, O. (2014, September 8). Recurrent Neural Network Regularization. arXiv.org.
Mikolov, T., Chen, K., Corrado, G., & Dean, J. (2013, January 17). Efficient Estimation of Word Representations in Vector Space. arXiv.org.
Smola, A., & Vishwanathan, S. V. N. (2010). Introduction to Machine Learning.
Schmitz, C., Grahl, M., Hotho, A., & Stumme, G. (2007). Network properties of folksonomies. World Wide Web ….
Esuli, A., & Sebastiani, F. (2006). SentiWordNet: A publicly available lexical resource for opinion mining. Proceedings of LREC.