TensorFlow is an open-source software library for numerical computation using data-flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.
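To make the graph idea concrete, here is a minimal pure-Python sketch of a data-flow graph. This is an illustrative toy, not TensorFlow's actual implementation: the `Node` class and its `eval` method are hypothetical names invented for this example.

```python
# Toy data-flow graph: nodes are operations, edges carry values
# (the "tensors") between them. Illustrative only, not TensorFlow.

class Node:
    def __init__(self, op, *inputs):
        self.op = op          # the mathematical operation at this node
        self.inputs = inputs  # incoming edges (other nodes or constants)

    def eval(self):
        # evaluate incoming edges first, then apply this node's operation
        vals = [i.eval() if isinstance(i, Node) else i for i in self.inputs]
        return self.op(*vals)

# build the graph (2 * 3) + 4, then run it
mul = Node(lambda x, y: x * y, 2, 3)
add = Node(lambda x, y: x + y, mul, 4)
print(add.eval())  # 10
```

TensorFlow works on the same principle at a much larger scale: you first describe the whole computation as a graph, and only then execute it.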
What happens inside machine-learning code is mathematics; TensorFlow helps organize that math in a way that simplifies it and keeps the computational flow organized.
We will use tflearn, a high-level layer on top of TensorFlow, and of course Python. As always, we will use an iPython notebook as the tool to make our work easier.
import nltk
from nltk.stem.lancaster import LancasterStemmer
stemmer = LancasterStemmer()

import numpy as np
import tflearn
import tensorflow as tf
import random
{"intents": [
    {"tag": "greeting",
     "patterns": ["Hi", "How are you", "Is anyone there?", "Hello", "Good day"],
     "responses": ["Hello, thanks for visiting", "Good to see you again", "Hi there, how can I help?"],
     "context_set": ""
    },
    {"tag": "goodbye",
     "patterns": ["Bye", "See you later", "Goodbye"],
     "responses": ["See you later, thanks for visiting", "Have a nice day", "Bye! Come back again soon."]
    },
    {"tag": "thanks",
     "patterns": ["Thanks", "Thank you", "That's helpful"],
     "responses": ["Happy to help!", "Any time!", "My pleasure"]
    },
    {"tag": "hours",
     "patterns": ["What hours are you open?", "What are your hours?", "When are you open?"],
     "responses": ["We're open every day 9am-9pm", "Our hours are 9am-9pm every day"]
    },
    {"tag": "mopeds",
     "patterns": ["Which mopeds do you have?", "What kinds of mopeds are there?", "What do you rent?"],
     "responses": ["We rent Yamaha, Piaggio and Vespa mopeds", "We have Piaggio, Vespa and Yamaha mopeds"]
    },
    {"tag": "payments",
     "patterns": ["Do you take credit cards?", "Do you accept Mastercard?", "Are you cash only?"],
     "responses": ["We accept VISA, Mastercard and AMEX", "We accept most major credit cards"]
    },
    {"tag": "opentoday",
     "patterns": ["Are you open today?", "When do you open today?", "What are your hours today?"],
     "responses": ["We're open every day from 9am-9pm", "Our hours are 9am-9pm every day"]
    },
    {"tag": "rental",
     "patterns": ["Can we rent a moped?", "I'd like to rent a moped", "How does this work?"],
     "responses": ["Are you looking to rent today or later this week?"],
     "context_set": "rentalday"
    },
    {"tag": "today",
     "patterns": ["today"],
     "responses": ["For rentals today please call 1-800-MYMOPED", "Same-day rentals please call 1-800-MYMOPED"],
     "context_filter": "rentalday"
    }
]}
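Before preprocessing, this intents structure has to be parsed into a Python dict. A minimal sketch, assuming the JSON above was saved to a file such as `intents.json`; here we parse a trimmed inline copy just to show the shape the later preprocessing code expects:

```python
import json

# Trimmed inline copy of the intents file, parsed the same way
# json.load(open('intents.json')) would parse the real file.
raw = '''
{"intents": [
  {"tag": "greeting",
   "patterns": ["Hi", "Hello"],
   "responses": ["Hello, thanks for visiting"]}
]}
'''
intents = json.loads(raw)
for intent in intents['intents']:
    print(intent['tag'], len(intent['patterns']))  # greeting 2
```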
words = []
classes = []
documents = []
ignore_words = ['?']
# loop through each sentence in our intents patterns
for intent in intents['intents']:
    for pattern in intent['patterns']:
        # tokenize each word in the sentence
        w = nltk.word_tokenize(pattern)
        words.extend(w)
        # add the tokenized sentence to our corpus of documents
        documents.append((w, intent['tag']))
        if intent['tag'] not in classes:
            classes.append(intent['tag'])

# stem and lowercase each word, remove duplicates
words = [stemmer.stem(w.lower()) for w in words if w not in ignore_words]
words = sorted(list(set(words)))
# remove duplicates
classes = sorted(list(set(classes)))
training = []
output = []
# an empty array for the one-hot class output
output_empty = [0] * len(classes)

# create the training set: a bag of words for each sentence
for doc in documents:
    bag = []
    # list of tokenized words for the pattern, stemmed and lowercased
    pattern_words = doc[0]
    pattern_words = [stemmer.stem(word.lower()) for word in pattern_words]
    # 1 if the vocabulary word appears in the pattern, 0 otherwise
    for w in words:
        bag.append(1) if w in pattern_words else bag.append(0)
    # output is '1' for the current tag and '0' for the rest
    output_row = list(output_empty)
    output_row[classes.index(doc[1])] = 1
    training.append([bag, output_row])

# shuffle the features and split into train_x and train_y
random.shuffle(training)
training = np.array(training)
train_x = list(training[:,0])
train_y = list(training[:,1])
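To see what one training row looks like, here is a toy walk-through with a tiny hypothetical vocabulary and class list (the real `words` and `classes` come from the intents file): the bag marks which vocabulary words appear in the pattern, and the output row one-hot encodes the intent class.

```python
# Hypothetical vocabulary and classes, for illustration only.
vocab = ['hi', 'hello', 'bye', 'open', 'today']
demo_classes = ['goodbye', 'greeting', 'hours']

# One stemmed pattern belonging to the 'hours' intent.
pattern = ['open', 'today']
bag = [1 if w in pattern else 0 for w in vocab]

output_row = [0] * len(demo_classes)
output_row[demo_classes.index('hours')] = 1

print(bag)         # [0, 0, 0, 1, 1]
print(output_row)  # [0, 0, 1]
```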
tf.reset_default_graph()
# two hidden layers of 8 neurons, softmax output over the intent classes
net = tflearn.input_data(shape=[None, len(train_x[0])])
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, 8)
net = tflearn.fully_connected(net, len(train_y[0]), activation='softmax')
net = tflearn.regression(net)
# define the model, train it, and save it
model = tflearn.DNN(net)
model.fit(train_x, train_y, n_epoch=1000, batch_size=8, show_metric=True)
model.save('model.tflearn')
def clean_up_sentence(sentence):
    # tokenize the pattern
    sentence_words = nltk.word_tokenize(sentence)
    # stem each word
    sentence_words = [stemmer.stem(word.lower()) for word in sentence_words]
    return sentence_words

# return a bag-of-words array: 0 or 1 for each word in the bag that exists in the sentence
def bow(sentence, words, show_details=False):
    # tokenize the pattern
    sentence_words = clean_up_sentence(sentence)
    # bag of words
    bag = [0] * len(words)
    for s in sentence_words:
        for i, w in enumerate(words):
            if w == s:
                bag[i] = 1
                if show_details:
                    print("found in bag: %s" % w)
    return(np.array(bag))

p = bow("is your shop open today?", words)
print(p)
print(classes)
print(model.predict([p]))
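`model.predict` returns one probability per intent class; the usual next step is to drop low-confidence predictions and sort the rest. A minimal sketch with a stubbed probability list standing in for `model.predict([p])[0]` (the helper name and the 0.25 threshold are illustrative assumptions):

```python
ERROR_THRESHOLD = 0.25

def classify_from_probs(probs, classes):
    # keep predictions above the threshold, highest probability first
    results = [(i, p) for i, p in enumerate(probs) if p > ERROR_THRESHOLD]
    results.sort(key=lambda x: x[1], reverse=True)
    return [(classes[i], p) for i, p in results]

# stubbed probabilities in place of model.predict([p])[0]
demo_classes = ['goodbye', 'greeting', 'hours']
demo_probs = [0.05, 0.10, 0.85]
print(classify_from_probs(demo_probs, demo_classes))  # [('hours', 0.85)]
```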
# save all of our data structures for later use
import pickle
pickle.dump({'words': words, 'classes': classes, 'train_x': train_x, 'train_y': train_y},
            open("training_data", "wb"))
Source: https://chatbotsmagazine.com/contextual-chat-bots-with-tensorflow-4391749d0077