
Sentiment analysis

Sentiment analysis (sometimes known as opinion mining or emotion AI) refers to the use of natural language processing, text analysis, computational linguistics, and biometrics to systematically identify, extract, quantify, and study affective states and subjective information. Sentiment analysis is widely applied to voice of the customer materials such as reviews and survey responses, online and social media, and healthcare materials for applications that range from marketing to customer service to clinical medicine.

Generally speaking, sentiment analysis aims to determine the attitude of a speaker, writer, or other subject with respect to some topic, or the overall contextual polarity or emotional reaction to a document, interaction, or event. The attitude may be a judgment or evaluation (see appraisal theory), an affective state (that is to say, the emotional state of the author or speaker), or the intended emotional communication (that is to say, the emotional effect intended by the author or interlocutor).


1 Types

A basic task in sentiment analysis is classifying the polarity of a given text at the document, sentence, or feature/aspect level: whether the expressed opinion in a document, a sentence or an entity feature/aspect is positive, negative, or neutral. Advanced, "beyond polarity" sentiment classification looks, for instance, at emotional states such as "angry", "sad", and "happy".

One method that holds historical priority is the method of Volcani and Fogel,[1] which remains in practice currently. The method examines individual words and phrases with respect to different emotional scales and presents synonyms that can be used to increase or decrease the level of evoked emotion in each scale.

Other early work included Turney[2] and Pang,[3] who applied different methods for detecting the polarity of product reviews and movie reviews respectively. This work is at the document level. One can also classify a document's polarity on a multi-way scale, which was attempted by Pang[4] and Snyder,[5] among others: Pang and Lee[4] expanded the basic task of classifying a movie review as either positive or negative to predicting star ratings on either a 3- or a 4-star scale, while Snyder[5] performed an in-depth analysis of restaurant reviews, predicting ratings for various aspects of the given restaurant, such as the food and atmosphere (on a five-star scale).

Even though in most statistical classification methods the neutral class is ignored under the assumption that neutral texts lie near the boundary of the binary classifier, several researchers suggest that, as in every polarity problem, three categories must be identified. Moreover, it can be proven that specific classifiers such as Max Entropy[6] and SVMs[7] can benefit from the introduction of a neutral class and improve the overall accuracy of the classification. There are in principle two ways of operating with a neutral class. Either the algorithm proceeds by first identifying the neutral language, filtering it out, and then assessing the rest in terms of positive and negative sentiments, or it builds a three-way classification in one step.[8] This second approach often involves estimating a probability distribution over all categories (e.g. naive Bayes classifiers as implemented by Python's NLTK). Whether and how to use a neutral class depends on the nature of the data: if the data is clearly clustered into neutral, negative and positive language, it makes sense to filter the neutral language out and focus on the polarity between positive and negative sentiments. If, in contrast, the data are mostly neutral with small deviations towards positive and negative affect, this strategy would make it harder to clearly distinguish between the two poles.
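A minimal sketch of the second, one-step approach using NLTK's naive Bayes classifier with bag-of-words presence features; the tiny training set and the feature function below are illustrative placeholders, not a benchmarked setup:

    # Three-way (positive / negative / neutral) classification with NLTK's
    # naive Bayes classifier. Requires: pip install nltk
    import nltk

    def word_features(text):
        # Bag-of-words presence features (illustrative; real systems use richer
        # features such as n-grams or negation marking).
        return {word.lower(): True for word in text.split()}

    # Toy labeled data standing in for a real annotated corpus.
    train = [
        ("I love this phone, the screen is gorgeous", "positive"),
        ("Terrible battery and awful support", "negative"),
        ("The package arrived on Tuesday", "neutral"),
        ("Great camera and fast shipping", "positive"),
        ("The manual is twelve pages long", "neutral"),
        ("Worst purchase I have made this year", "negative"),
    ]
    featuresets = [(word_features(text), label) for text, label in train]
    classifier = nltk.NaiveBayesClassifier.train(featuresets)

    # The classifier yields a probability distribution over all three classes.
    dist = classifier.prob_classify(word_features("The screen is gorgeous"))
    for label in dist.samples():
        print(label, round(dist.prob(label), 3))
    print("predicted:", dist.max())
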

A different method for determining sentiment is the use of a scaling system whereby words commonly associated with a negative, neutral, or positive sentiment are given an associated number on a −10 to +10 scale (most negative up to most positive), or simply from 0 to a positive upper limit such as +4. This makes it possible to adjust the sentiment of a given term relative to its environment (usually on the level of the sentence). When a piece of unstructured text is analyzed using natural language processing, each concept in the specified environment is given a score based on the way sentiment words relate to the concept and its associated score.[9] This allows movement to a more sophisticated understanding of sentiment, because it is now possible to adjust the sentiment value of a concept relative to modifications that may surround it. Words, for example, that intensify, relax or negate the sentiment expressed by the concept can affect its score. Alternatively, texts can be given a positive and negative sentiment strength score if the goal is to determine the sentiment in a text rather than the overall polarity and strength of the text.[10]
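A minimal sketch of such a lexicon-and-scaling approach; the word scores, intensifier multipliers and negation handling below are invented for illustration and are not taken from any published lexicon:

    # Lexicon-based scoring on a -10..+10 scale, with simple handling of
    # intensifiers and negation in the word's immediate environment.
    LEXICON = {"excellent": 8, "good": 4, "mediocre": -2, "awful": -8}   # hypothetical scores
    INTENSIFIERS = {"very": 1.5, "slightly": 0.5}                        # hypothetical multipliers

    def score_sentence(sentence):
        words = sentence.lower().replace(",", " ").split()
        total = 0.0
        for i, word in enumerate(words):
            if word not in LEXICON:
                continue
            score = float(LEXICON[word])
            prev = words[i - 1] if i > 0 else ""
            if prev in INTENSIFIERS:                        # "very good" > "good"
                score *= INTENSIFIERS[prev]
            if prev == "not" or (i > 1 and words[i - 2] == "not"):
                score = -score                              # crude negation flip
            total += score
        return max(-10.0, min(10.0, total))                 # clamp to the -10..+10 scale

    print(score_sentence("The food was not good, the service was very excellent"))
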


1.1 Subjectivity/objectivity identification

This task is commonly defined as classifying a given text (usually a sentence) into one of two classes: objective or subjective.[11] This problem can sometimes be more difficult than polarity classification.[12] The subjectivity of words and phrases may depend on their context, and an objective document may contain subjective sentences (e.g., a news article quoting people's opinions). Moreover, as mentioned by Su,[13] results are largely dependent on the definition of subjectivity used when annotating texts. However, Pang[14] showed that removing objective sentences from a document before classifying its polarity helped improve performance.
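The two-stage idea of filtering objective sentences before judging polarity can be sketched as below; the subjectivity_score and polarity helpers are hypothetical stand-ins for trained classifiers and do not reproduce Pang's minimum-cut formulation:

    # Two-stage pipeline: keep only subjective sentences, then score polarity.
    # subjectivity_score() and polarity() are hypothetical trained models.
    def classify_document(sentences, subjectivity_score, polarity, threshold=0.5):
        subjective = [s for s in sentences if subjectivity_score(s) >= threshold]
        if not subjective:
            return "neutral"                       # nothing opinionated survived the filter
        votes = [polarity(s) for s in subjective]  # each vote is "positive" or "negative"
        return max(set(votes), key=votes.count)    # majority polarity of subjective sentences

    # Example with trivial stand-in models:
    doc = ["The film runs 120 minutes.", "I loved every minute.", "The ending was wonderful."]
    result = classify_document(
        doc,
        subjectivity_score=lambda s: 0.9 if ("loved" in s or "wonderful" in s) else 0.1,
        polarity=lambda s: "positive",
    )
    print(result)  # positive
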
1.2 Feature/aspect-based

It refers to determining the opinions or sentiments expressed on different features or aspects of entities, e.g., of a cell phone, a digital camera, or a bank.[15] A feature or aspect is an attribute or component of an entity, e.g., the screen of a cell phone, the service for a restaurant, or the picture quality of a camera. The advantage of feature-based sentiment analysis is the possibility of capturing nuances about objects of interest. Different features can generate different sentiment responses; for example, a hotel can have a convenient location but mediocre food.[16] This problem involves several sub-problems, e.g., identifying relevant entities, extracting their features/aspects, and determining whether an opinion expressed on each feature/aspect is positive, negative or neutral.[17] The automatic identification of features can be performed with syntactic methods or with topic modeling.[18][19] More detailed discussions about this level of sentiment analysis can be found in Liu's work.[20]
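A toy sketch of the aspect-level sub-problems, pairing each known aspect term with the nearest sentiment word in the same sentence; the aspect list and the sentiment lexicon are hypothetical, and real systems would extract aspects automatically (e.g., via syntactic patterns or topic models):

    # Pair each aspect mention with the closest sentiment-bearing word.
    ASPECTS = {"location", "food", "service", "screen"}                 # hypothetical aspect terms
    SENTIMENT = {"convenient": "positive", "mediocre": "negative",
                 "friendly": "positive", "cracked": "negative"}         # hypothetical lexicon

    def aspect_sentiments(sentence):
        words = [w.strip(".,").lower() for w in sentence.split()]
        results = {}
        for i, w in enumerate(words):
            if w in ASPECTS:
                # nearest sentiment word, measured by token distance
                nearby = [(abs(i - j), SENTIMENT[u]) for j, u in enumerate(words) if u in SENTIMENT]
                results[w] = min(nearby)[1] if nearby else "neutral"
        return results

    print(aspect_sentiments("The hotel has a convenient location but mediocre food."))
    # {'location': 'positive', 'food': 'negative'}
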
2 Methods and features

Existing approaches to sentiment analysis can be grouped into three main categories: knowledge-based techniques, statistical methods, and hybrid approaches.[21] Knowledge-based techniques classify text by affect categories based on the presence of unambiguous affect words such as happy, sad, afraid, and bored.[22] Some knowledge bases not only list obvious affect words, but also assign arbitrary words a probable affinity to particular emotions.[23] Statistical methods leverage elements from machine learning such as latent semantic analysis, support vector machines, "bag of words" and Semantic Orientation from Pointwise Mutual Information (see Peter Turney's[2] work in this area). More sophisticated methods try to detect the holder of a sentiment (i.e., the person who maintains that affective state) and the target (i.e., the entity about which the affect is felt).[24] To mine the opinion in context and get the feature which has been opinionated, the grammatical relationships of words are used. Grammatical dependency relations are obtained by deep parsing of the text.[25] Hybrid approaches leverage both machine learning and elements from knowledge representation such as ontologies and semantic networks in order to detect semantics that are expressed in a subtle manner, e.g., through the analysis of concepts that do not explicitly convey relevant information, but which are implicitly linked to other concepts that do so.[26]
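As an illustration of the statistical family, a small sketch of semantic orientation from pointwise mutual information in the spirit of Turney's[2] approach; the paradigm words "excellent" and "poor" follow the original paper, but the co-occurrence counts here come from a toy in-memory corpus with ad-hoc smoothing rather than from search-engine hit counts:

    import math
    from collections import Counter

    # Toy corpus; Turney's method originally used web search hit counts instead.
    corpus = [
        "excellent tasty food and excellent service",
        "poor service and dreadful value",
        "the food was tasty and the staff friendly",
        "the room was dreadful and the service poor",
    ]
    docs = [set(doc.split()) for doc in corpus]
    df = Counter(w for doc in docs for w in doc)          # document frequencies
    N = len(docs)

    def cooccur(a, b):
        return sum(1 for doc in docs if a in doc and b in doc)

    def pmi(a, b):
        joint = cooccur(a, b) + 0.01                      # small smoothing constant
        return math.log2(joint * N / (df[a] * df[b] + 0.01))

    def semantic_orientation(word):
        # SO(word) = PMI(word, "excellent") - PMI(word, "poor")
        return pmi(word, "excellent") - pmi(word, "poor")

    for w in ["tasty", "dreadful"]:
        print(w, round(semantic_orientation(w), 2))       # positive for "tasty", negative for "dreadful"
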
Open source software tools deploy machine learning, statistics, and natural language processing techniques to automate sentiment analysis on large collections of texts, including web pages, online news, internet discussion groups, online reviews, web blogs, and social media.[27] Knowledge-based systems, on the other hand, make use of publicly available resources to extract the semantic and affective information associated with natural language concepts. Sentiment analysis can also be performed on visual content, i.e., images and videos. One of the first approaches in this direction is SentiBank,[28] utilizing an adjective noun pair representation of visual content.

A human analysis component is required in sentiment analysis, as automated systems are not able to analyze historical tendencies of the individual commenter or the platform, and are often classified incorrectly in their expressed sentiment. Automation impacts approximately 23% of comments that are correctly classified by humans.[29] However, humans also often disagree, and it is argued that the inter-human agreement provides an upper bound that automated sentiment classifiers can eventually reach.[30]

Sometimes, the structure of sentiments and topics is fairly complex. Also, the problem of sentiment analysis is non-monotonic with respect to sentence extension and stop-word substitution (compare "THEY would not let my dog stay in this hotel" vs "I would not let my dog stay in this hotel"). To address this issue a number of rule-based and reasoning-based approaches have been applied to sentiment analysis, including defeasible logic programming.[31] There are also tree traversal rules applied to the syntactic parse tree to extract the topicality of sentiment in an open domain setting.[32][33]


3 Evaluation

The accuracy of a sentiment analysis system is, in principle, how well it agrees with human judgments. This is usually measured by precision and recall. However, according to research, human raters typically agree 79% of the time[34] (see Inter-rater reliability).

Thus, a 70% accurate program is doing nearly as well as humans, even though such accuracy may not sound impressive. If a program were "right" 100% of the time, humans would still disagree with it about 20% of the time, since they disagree that much about any answer.[35]

More sophisticated measures can be applied, but evaluation of sentiment analysis systems remains a complex matter. For sentiment analysis tasks returning a scale rather than a binary judgement, correlation is a better measure than precision because it takes into account how close the predicted value is to the target value.
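A small sketch contrasting the two kinds of evaluation mentioned above: exact-agreement accuracy for a categorical task versus Pearson correlation for a scaled (e.g. star-rating) task; the predicted and gold values are made-up numbers for illustration:

    from statistics import mean

    def accuracy(pred, gold):
        return sum(p == g for p, g in zip(pred, gold)) / len(gold)

    def pearson(pred, gold):
        mp, mg = mean(pred), mean(gold)
        cov = sum((p - mp) * (g - mg) for p, g in zip(pred, gold))
        sd_p = sum((p - mp) ** 2 for p in pred) ** 0.5
        sd_g = sum((g - mg) ** 2 for g in gold) ** 0.5
        return cov / (sd_p * sd_g)

    # Star-rating predictions (1-5) vs. human labels: only one exact match,
    # yet the system tracks the gold ranking closely.
    pred = [4, 2, 5, 1, 3]
    gold = [5, 1, 4, 2, 3]
    print("exact-agreement accuracy:", accuracy(pred, gold))   # 0.2
    print("correlation:", round(pearson(pred, gold), 2))       # 0.8
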
4 Web 2.0

See also: Reputation management

The rise of social media such as blogs and social networks has fueled interest in sentiment analysis. With the proliferation of reviews, ratings, recommendations and other forms of online expression, online opinion has turned into a kind of virtual currency for businesses looking to market their products, identify new opportunities and manage their reputations. As businesses look to automate the process of filtering out the noise, understanding the conversations, identifying the relevant content and actioning it appropriately, many are now looking to the field of sentiment analysis.[36] Further complicating the matter is the rise of anonymous social media platforms such as 4chan and Reddit.[37] If web 2.0 was all about democratizing publishing, then the next stage of the web may well be based on democratizing data mining of all the content that is getting published.[38]

One step towards this aim is accomplished in research. Several research teams in universities around the world currently focus on understanding the dynamics of sentiment in e-communities through sentiment analysis.[39] The CyberEmotions project, for instance, recently identified the role of negative emotions in driving social network discussions.[40]

The problem is that most sentiment analysis algorithms use simple terms to express sentiment about a product or service. However, cultural factors, linguistic nuances and differing contexts make it extremely difficult to turn a string of written text into a simple pro or con sentiment.[36] The fact that humans often disagree on the sentiment of text illustrates how big a task it is for computers to get this right. The shorter the string of text, the harder it becomes.

Even though short text strings might be a problem, sentiment analysis within microblogging has shown that Twitter can be seen as a valid online indicator of political sentiment. Tweets' political sentiment demonstrates close correspondence to parties' and politicians' political positions, indicating that the content of Twitter messages plausibly reflects the offline political landscape.[41]
5 Application in recommender systems

See also: Recommender system

For a recommender system, sentiment analysis has been proven to be a valuable technique. A recommender system aims to predict the preference of a target user for an item. Mainstream recommender systems work on explicit data sets. For example, collaborative filtering works on the rating matrix, and content-based filtering works on the meta-data of the items.

In many social networking services or e-commerce websites, users can provide text reviews, comments or feedback on items. These user-generated texts provide a rich source of users' sentiment opinions about numerous products and items. Potentially, for an item, such text can reveal both the related features/aspects of the item and the users' sentiments on each feature.[42] The item's features/aspects described in the text play the same role as the meta-data in content-based filtering, but the former are more valuable for the recommender system. Since these features are broadly mentioned by users in their reviews, they can be seen as the most crucial features that can significantly influence the user's experience with the item, while the meta-data of the item (usually provided by the producers instead of consumers) may ignore features that are of concern to the users. For different items with common features, a user may give different sentiments. Also, a feature of the same item may receive different sentiments from different users. Users' sentiments on the features can be regarded as a multi-dimensional rating score, reflecting their preference for the items.

Based on the features/aspects and the sentiments extracted from the user-generated text, a hybrid recommender system can be constructed.[43] There are two types of motivation to recommend a candidate item to a user. The first motivation is that the candidate item has numerous common features with the user's preferred items,[44] while the second motivation is that the candidate item receives a high sentiment on its features. For a preferred item, it is reasonable to believe that items with the same features will have a similar function or utility, so these items will also likely be preferred by the user. On the other hand, for a shared feature of two candidate items, other users may give positive sentiment to one of them while giving negative sentiment to the other. Clearly, the highly evaluated item should be recommended to the user. Based on these two motivations, a combined ranking score of similarity and sentiment rating can be constructed for each candidate item.[43]
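A toy sketch of combining the two motivations into one ranking score; the feature-similarity measure, the aspect sentiment values and the weighting parameter alpha are all illustrative choices rather than a prescribed formula from the cited work:

    # Rank candidate items by a weighted mix of feature similarity to the user's
    # preferred items and aspect-level sentiment mined from other users' reviews.
    from statistics import mean

    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 0.0

    def rank_candidates(preferred_features, candidates, alpha=0.5):
        # candidates: {item: {"features": [...], "sentiment": {feature: score in [-1, 1]}}}
        scores = {}
        for item, info in candidates.items():
            similarity = jaccard(preferred_features, info["features"])
            sentiment = mean(info["sentiment"].values()) if info["sentiment"] else 0.0
            scores[item] = alpha * similarity + (1 - alpha) * sentiment
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    candidates = {
        "hotel_A": {"features": ["pool", "wifi"], "sentiment": {"pool": 0.9, "wifi": 0.4}},
        "hotel_B": {"features": ["wifi", "parking"], "sentiment": {"wifi": -0.6, "parking": 0.2}},
    }
    print(rank_candidates(["wifi", "pool", "breakfast"], candidates))  # hotel_A ranks first
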
Apart from the difficulty of the sentiment analysis itself, applying sentiment analysis to reviews or feedback also faces the challenge of spam and biased reviews. One direction of work is focused on evaluating the helpfulness of each review.[45] Poorly written reviews or feedback are hardly helpful to a recommender system. Besides, a review can be designed to hinder the sales of a target product, and thus be harmful to the recommender system even if it is well written.

Researchers also found that long-form and short-form user-generated text should be treated differently. An interesting result shows that short-form reviews are sometimes more helpful than long-form ones,[46] because it is easier to filter out the noise in a short-form text. For long-form text, the growing length of the text does not always bring a proportionate increase in the number of features or sentiments in the text.


6 See also

Johan Bollen
Market sentiment


7 References

[1] USA Issued 7,136,877, Volcani, Yanon; & Fogel, David B., "System and method for determining and controlling the impact of text", published June 28, 2001.

[2] Turney, Peter (2002). "Thumbs Up or Thumbs Down? Semantic Orientation Applied to Unsupervised Classification of Reviews". Proceedings of the Association for Computational Linguistics. pp. 417–424. arXiv:cs.LG/0212032.

[3] Pang, Bo; Lee, Lillian; Vaithyanathan, Shivakumar (2002). "Thumbs up? Sentiment Classification using Machine Learning Techniques". Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). pp. 79–86.

[4] Pang, Bo; Lee, Lillian (2005). "Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales". Proceedings of the Association for Computational Linguistics (ACL). pp. 115–124.

[5] Snyder, Benjamin; Barzilay, Regina (2007). "Multiple Aspect Ranking using the Good Grief Algorithm". Proceedings of the Joint Human Language Technology/North American Chapter of the ACL Conference (HLT-NAACL). pp. 300–307.

[6] Vryniotis, Vasilis (2013). "The importance of Neutral Class in Sentiment Analysis".

[7] Koppel, Moshe; Schler, Jonathan (2006). "The Importance of Neutral Examples for Learning Sentiment". Computational Intelligence 22. pp. 100–109. CiteSeerX 10.1.1.84.9735.

[8] Ribeiro, Filipe Nunes; Araujo, Matheus (2010). "A Benchmark Comparison of State-of-the-Practice Sentiment Analysis Methods". Transactions on Embedded Computing Systems. 9 (4).

[9] Taboada, Maite; Brooke, Julian (2011). "Lexicon-based methods for sentiment analysis". Computational Linguistics. 37 (2): 272–274. doi:10.1162/coli_a_00049.

[10] Thelwall, Mike; Buckley, Kevan; Paltoglou, Georgios; Cai, Di; Kappas, Arvid (2010). "Sentiment strength detection in short informal text". Journal of the American Society for Information Science and Technology. 61 (12): 2544–2558. doi:10.1002/asi.21416.

[11] Pang, Bo; Lee, Lillian (2008). "4.1.2 Subjectivity Detection and Opinion Identification". Opinion Mining and Sentiment Analysis. Now Publishers Inc.

[12] Mihalcea, Rada; Banea, Carmen; Wiebe, Janyce (2007). "Learning Multilingual Subjective Language via Cross-Lingual Projections" (PDF). Proceedings of the Association for Computational Linguistics (ACL). pp. 976–983.

[13] Su, Fangzhong; Markert, Katja (2008). "From Words to Senses: a Case Study in Subjectivity Recognition" (PDF). Proceedings of Coling 2008, Manchester, UK.

[14] Pang, Bo; Lee, Lillian (2004). "A Sentimental Education: Sentiment Analysis Using Subjectivity Summarization Based on Minimum Cuts". Proceedings of the Association for Computational Linguistics (ACL). pp. 271–278.

[15] Hu, Minqing; Liu, Bing (2004). "Mining and Summarizing Customer Reviews". Proceedings of KDD 2004.

[16] Cataldi, Mario; Ballatore, Andrea; Tiddi, Ilaria; Aufaure, Marie-Aude (2013-06-22). "Good location, terrible food: detecting feature sentiment in user-generated reviews". Social Network Analysis and Mining. 3 (4): 1149–1163. doi:10.1007/s13278-013-0119-7. ISSN 1869-5450.

[17] Liu, Bing; Hu, Minqing; Cheng, Junsheng (2005). "Opinion Observer: Analyzing and Comparing Opinions on the Web". Proceedings of WWW 2005.

[18] Zhai, Zhongwu; Liu, Bing; Xu, Hua; Jia, Peifa (2011-01-01). Huang, Joshua Zhexue; Cao, Longbing; Srivastava, Jaideep, eds. "Constrained LDA for Grouping Product Features in Opinion Mining". Lecture Notes in Computer Science. Springer Berlin Heidelberg. pp. 448–459. doi:10.1007/978-3-642-20841-6_37. ISBN 978-3-642-20840-9.

[19] Titov, Ivan; McDonald, Ryan (2008-01-01). "Modeling Online Reviews with Multi-grain Topic Models". Proceedings of the 17th International Conference on World Wide Web. WWW '08. New York, NY, USA: ACM: 111–120. doi:10.1145/1367497.1367513. ISBN 978-1-60558-085-2.

[20] Liu, Bing (2010). "Sentiment Analysis and Subjectivity" (PDF). In Indurkhya, N.; Damerau, F. J. Handbook of Natural Language Processing (Second ed.).

[21] Cambria, E; Schuller, B; Xia, Y; Havasi, C (2013). "New avenues in opinion mining and sentiment analysis". IEEE Intelligent Systems. 28 (2): 15–21. doi:10.1109/MIS.2013.30.

[22] Ortony, Andrew; Clore, G; Collins, A (1988). The Cognitive Structure of Emotions (PDF). Cambridge Univ. Press.

[23] Stevenson, Ryan; Mikels, Joseph; James, Thomas (2007). "Characterization of the Affective Norms for English Words by Discrete Emotional Categories" (PDF). Behavior Research Methods. 39 (4): 1020–1024. doi:10.3758/bf03192999. PMID 18183921.

[24] Kim, S. M.; Hovy, E. H. (2006). "Identifying and Analyzing Judgment Opinions" (PDF). Proceedings of the Human Language Technology / North American Association of Computational Linguistics conference (HLT-NAACL 2006). New York, NY.

[25] Dey, Lipika; Haque, S. K. Mirajul (2008). "Opinion Mining from Noisy Text Data". Proceedings of the second workshop on Analytics for noisy unstructured text data, pp. 83–90.

[26] Cambria, E; Hussain, A (2012). Sentic Computing: Techniques, Tools, and Applications. Springer.

[27] Akcora, Cuneyt Gurcan; Bayir, Murat Ali; Demirbas, Murat; Ferhatosmanoglu, Hakan (2010). "Identifying breakpoints in public opinion". SigKDD, Proceedings of the First Workshop on Social Media Analytics.

[28] Borth, Damian; Ji, Rongrong; Chen, Tao; Breuel, Thomas; Chang, Shih-Fu (2013). "Large-scale Visual Sentiment Ontology and Detectors Using Adjective Noun Pairs". Proceedings of the ACM International Conference on Multimedia. pp. 223–232.

[29] "Case Study: Advanced Sentiment Analysis". Retrieved 18 October 2013.

[30] Mozetič, Igor; Grčar, Miha; Smailović, Jasmina (2016-05-05). "Multilingual Twitter Sentiment Classification: The Role of Human Annotators". PLOS ONE. 11 (5): e0155036. doi:10.1371/journal.pone.0155036. ISSN 1932-6203. PMC 4858191. PMID 27149621.

[31] Galitsky, Boris; McKenna, Eugene William. "Sentiment Extraction from Consumer Reviews for Providing Product Recommendations". Retrieved 18 November 2013.

[32] Galitsky, Boris; Dobrocsi, Gabor; de la Rosa, Josep Lluís (2010). "Inverting Semantic Structure Under Open Domain Opinion Mining". FLAIRS Conference.

[33] Galitsky, Boris; Chen, Huanjin; Du, Shaobin (2009). "Inversion of Forum Content Based on Authors' Sentiments on Product Usability". AAAI Spring Symposium: Social Semantic Web: Where Web 2.0 Meets Web 3.0: 33–38.

[34] Ogneva, M. "How Companies Can Use Sentiment Analysis to Improve Their Business". Mashable. Retrieved 2012-12-13.

[35] Roebuck, K. (2012-10-24). Sentiment Analysis: High-impact Strategies - What You Need to Know: Definitions, Adoptions, Impact, Benefits, Maturity, Vendors. ISBN 9781743049457.

[36] Wright, Alex. "Mining the Web for Feelings, Not Facts", New York Times, 2009-08-23. Retrieved on 2009-10-01.

[37] "Sentiment Analysis on Reddit". Retrieved 10 October 2014.

[38] Kirkpatrick, Marshall. ReadWriteWeb, 2009-04-15. Retrieved on 2009-10-01.

[39] CORDIS. "Collective emotions in cyberspace (CYBEREMOTIONS)", European Commission, 2009-02-03. Retrieved on 2010-12-13.

[40] Condliffe, Jamie. "Flaming drives online social networks", NewScientist, 2010-12-07. Retrieved on 2010-12-13.

[41] Tumasjan, Andranik; O. Sprenger, Timm; G. Sandner, Philipp; M. Welpe, Isabell (2010). "Predicting Elections with Twitter: What 140 Characters Reveal about Political Sentiment". Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media.

[42] Tang, Huifeng; Tan, Songbo; Cheng, Xueqi (2009). "A survey on sentiment detection of reviews". Expert Systems with Applications. 36 (7): 10760–10773.

[43] Jakob, Niklas; et al. (2009). "Beyond the stars: exploiting free-text user reviews to improve the accuracy of movie recommendations". Proceedings of the 1st international CIKM workshop on Topic-sentiment analysis for mass opinion. ACM.

[44] Hu, Minqing; Liu, Bing (2004). "Mining opinion features in customer reviews". AAAI. Vol. 4. No. 4.

[45] Liu, Yang; et al. (2008). "Modeling and predicting the helpfulness of online reviews". Eighth IEEE International Conference on Data Mining (ICDM '08). IEEE.

[46] Bermingham, Adam; Smeaton, Alan F. (2010). "Classifying sentiment in microblogs: is brevity an advantage?". Proceedings of the 19th ACM international conference on Information and knowledge management. ACM.