Hatena Keyword: what is "user"?
I had forgotten about it for a long time.
No, that isn't quite right: it clung to the back of my brain like a half-dried scab, and at odd moments it would surface again.
For some reason, it came back to me yesterday.
Momoko was a Java engineer. Or rather, I don't know whether she qualified as an engineer, but she was a 25-year-old woman learning Java.
At the time, Java was drawing enormous attention thanks to the arrival of J2EE. Replacing enterprise business applications with web-based Java applications had become a major source of demand, and engineers who had built their careers on VB, Delphi, or backend COBOL were being pushed to retool.
It may not be so different today, but few business-systems engineers had studied computer science.
Many came from humanities backgrounds, and it was doubtful whether some of them properly understood functions or algebra in the mathematical sense; naturally, plenty were bewildered by object-oriented languages. I was one of them.
There were a handful of technical communities at the time, most of them long established, but every one felt intimidating to enter.
fj.comp.lang.* (netnews, in the old sense of the word) had its share of genuine engineers, but it either lacked an atmosphere where beginners could post or had simply gone quiet. The communities built on dial-up services such as NIFTY-Serve and PC-VAN (the service names may already have changed by then, but everyone still called them by the old ones) had history behind them, but many of their forums were lorded over by self-important old-timers, and they eventually withered away.
Amid all that, the place where beginners and veterans alike held lively, good-natured discussions was a mailing-list community called JavaHouse.
Its host was Hiromitsu Takagi, a.k.a. "Hiromichu-sensei," these days a big-shot leftist of the internet-security world who periodically scares the daylights out of the public. He was already a researcher at AIST back then, but the list was strictly a personal, out-of-pocket operation: free, with anyone able to join or leave at will. For novice Java engineers who felt they had nowhere else to go, it became a great emotional anchor.
Even so, there was one occasion when Momoko drew everyone's attention.
At the time there was no proper solution for formatting output for printing, so the topic came up fairly often, and Momoko was struggling with it like everyone else.
"...a method along those lines would, I think, be reasonable at this point."
In a charming, Doraemon-like style that mixed hiragana and katakana, she expressed that anxiety thoroughly and with remarkable skill.
But in the middle of this discussion, trouble struck: JavaHouse suffered an outage and mail stopped being delivered.
The list host mounted another server over NFS, and because part of the system still referenced data on an NFS server that had been judged unnecessary, processing could not proceed. The administrator, Hiromitsu Takagi, announced that he had applied a stopgap fix and would carry out a proper fix at a later date.
Unsettled that the discussion had gone quiet, Momoko contacted Takagi directly to check what was happening. No reply came back.
> This Message was undeliverable due to the following reason:
> The user(s) account is temporarily over quota.
My reply bounced with the error above. You appeared to have a number of concerns (I will not go into what they were), and I felt the need to respond promptly, but as things stand above there is no way to reach you, so I have no choice but to note here the fact that I did send a reply.
As if the fault lay with Momoko.
It is not hard to picture Momoko's tearful face, flushed red with embarrassment and frustration.
The next thread was the time-honored argument about the cost of creating instances,
and in the middle of it, though it had nothing to do with the topic, Momoko brought up the etiquette of the exchange.
The participants offered perfunctory apologies to a point that was beside the substance, but not Hiromitsu Takagi:
"There is no such custom here."
Momoko must surely have been left seething.
Then, at last, the decisive incident.
Takagi announced that the service would be taken down for maintenance to properly repair the earlier outage.
Momoko, who felt she had been burned by Takagi time and again, replied to him:
"What follows is just a bit of ribbing.
Having every service unavailable—that must be quite an ordeal, surely.
That was the ribbing. That's all."
In the end, someone even appeared suggesting she go and start a Java community of her own.
For the frail, delicate, considerate, fleeting Momoko, enduring mockery like that must have been an unbearable agony.
With all the strength she could muster, Momoko wrote out her thoughts on the future of the Java world:
"What I fear is that the day of dread will come upon us; I want discussions to be held before it is too late to act."
The farewell came without warning.
"This is Momoko's husband."
The post, with its short subject line so out of place in a technical community, was devastating.
"My wife Momoko passed away on the 11th, of acute heart failure brought on by advanced cancer.
She was 25.
When we received the doctor's diagnosis, the time left to us was one month.
Every day was terror."
What a thing to happen! A tragedy like this should never be allowed!
And when she was asking about that printout problem, had she already known how little time she had left?
I was crushed.
And then came something even more shocking.
"Momoko had been downcast since the week before.
When I asked, it seemed things were not going well with 'Boo-sama.'
For quite some time Momoko had often brought up the name of a man called Takagi, praising to the skies how he single-handedly ran this 'Java' computer thing and did everything all by himself."
Damn you, Takagi! No—damn you, Boo-sama!
So it was you who filled Momoko's heart with darkness.
In my frustration I knocked back glass after glass of straight whiskey and tried to forget the whole painful affair.
By then I had long since stopped doing any application programming myself.
Once every few years the memory would well up out of nowhere, and I would try to forget it right away.
But yesterday I kept wondering why I had suddenly remembered Momoko Kikuchi.
On and on I wondered.
I think I loved her. Momoko.
I never met her, never saw her; I only ever saw her as text in her posts, and we never exchanged a single direct message. And yet I am certain I loved Momoko.
Far more than Masato ever did.
And surely more than Boo-sama did.
Rest in peace, I love you.
https://web.archive.org/web/20091027013532/http://java-house.jp/ml/archive/j-h-b/052276.html#body
◀ What even is this person...
My working definition of "Yukkuri commentary" here: videos narrated with Yukkuri voices or VOICEROID; the manju characters don't have to appear.
This is a roundup of Yukkuri commentary channels that go well beyond the "I looked it up a bit" level of armchair articles.
If you know of others, let me know.
(For some reason I can't seem to paste YouTube links anymore? Maybe because there are so many.)
三崎律日/Alt F4 — has a published book; he's been at it since the Nico Nico Douga days and is famous, so this is hardly news.
カカチャンネル — another old hand from Nico Nico; lately also covers entrepreneurs and walks through official investigation reports.
@cakachannel
This channel's videos are far beyond mere trivia; the geography-meets-human-history angle is fascinating, and they occasionally share the books they drew on.
@GeoYukkuri
Has published books; supposedly the spark behind the "Zannen na Ikimono" boom (is that right?). An acquired taste, but the upload pace is relentless.
@henchrou
@RuiScience/
@9wari.zatugaku
ちょっと得するゆっくり学園 — the skits make it look loose and silly, but the content is actually quite solid.
@yukkuri-gakuen
うわさのゆっくり解説 — also an acquired taste, but dense; a lot of it probably lands especially well with people in their 40s and up.
@uwasa33
リーキのゆっくり世界解説 — rich in regional trivia you rarely see covered elsewhere.
@user-gb9ku2pk6m
@doboken
俺の世界史ch — will the "things that shaped world history" ranking series ever reach its conclusion!?
@myworldhistorych
@user-tl9xk9wz3p
@user-lg3merasikyou
@mond_hitorica
よつばch
Endlessly pulls up family trees and teases the great figures of history; you'll come away fond of the Habsburgs and Empress Sophia. A channel for laughs.
@yotsuba-ch
A bit different in flavor, but fun: lots of tours of attainable, real-world rooms of the kind you rarely see on TV.
@YukkuriFudosan
@_anariso_
ーー
I also like 進化生物学ch; it's a little on the difficult side, but I love the videos on things like where biological sex came from.
Personally, commentary channels are my background audio for work, though the ones above sometimes sabotage the work itself; the geography one especially.
Fewer commentary channels open with "this might contain mistakes" these days; was that a Nico Nico custom?
There are bound to be plenty of mistakes, so read the comments and point things out kindly; most of these are one-person operations, so of course they slip up sometimes.
ゆっくりするところ — calmly narrates incidents and accidents; I almost left it out as off-topic for this list, but I've watched nearly everything.
ゆっくりゴシップちゃんねる — another long-timer; their explainer on Schistosoma japonicum was, for me, the best one out there.
サラセツチャンネル — covers incidents, accidents, and online firestorms, but focuses on what happened after everyone's attention moved on; full of unexpected findings.
Also, a lot of "corporate failure" channels appeared later on, but they inevitably lean toward ragebait, so I always end up back at カカチャンネル.
I just noticed there's no Japanese history here; it tends to demand a lot of prior knowledge. That said, the Jomon-period explainers are fun because the scholarly consensus apparently keeps shifting.
More current events than trivia, but for Zundamon channels, is だーまめ the big name?
I never properly knew what happened in the Hakkoda Mountains incident, even though I'm from the area.
Oh, and over on Nico Nico, いつかやる社長 is great; I recommend the eccentrics-weirdos-greats series.
The post about shareware (setting aside whether that's really the right word: https://anond.hatelabo.jp/20230124045812) was a fun read, so here is a roundup from my own area of expertise.
SaaS is convenient, but sign up for one service after another and you'll go subscription-bankrupt, so the theme here is OSS you can self-host instead.
There was originally a Dropbox alternative called ownCloud; this project forked off from it and is still under active feature development.
The nice part is that it's a LAMP stack, so it runs on ordinary shared rental hosting, not just a VPS or a home server.
For storage it can also use S3-compatible object storage, so signing up with something like Wasabi to keep costs down is a perfectly reasonable option. (A small upload sketch follows below.)
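As a concrete taste of the "Dropbox alternative" angle: assuming the ownCloud fork in question is Nextcloud, uploading a file is just a WebDAV PUT. The host, account, and app password below are placeholders.

```python
# Hedged sketch: assumes the ownCloud fork described here is Nextcloud and
# that WebDAV is reachable at the usual remote.php/dav path. Host, user, and
# the app password are placeholders.
import requests

BASE = "https://cloud.example.com"          # your instance (assumption)
USER = "alice"                              # account name (assumption)
APP_PASSWORD = "xxxx-xxxx-xxxx-xxxx"        # generated app password (assumption)

def upload(local_path: str, remote_name: str) -> None:
    """Upload a single file over WebDAV (a standard HTTP PUT)."""
    url = f"{BASE}/remote.php/dav/files/{USER}/{remote_name}"
    with open(local_path, "rb") as f:
        resp = requests.put(url, data=f, auth=(USER, APP_PASSWORD), timeout=30)
    resp.raise_for_status()   # 201 Created on first upload, 204 on overwrite

if __name__ == "__main__":
    upload("report.pdf", "report.pdf")
```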
These days kanban boards tend to live inside other apps rather than stand on their own, so maybe nobody is asking for "just Trello" anymore,
but for small projects that aren't all that complex, "actually, Trello alone is plenty" still comes up surprisingly often.
For those cases, this might be the one to reach for.
The UI feels slightly different, but otherwise it's Slack, plain and simple. You can add custom emoji, there's an API, and the thread feature people either love or hate is, well, that same thread feature.
Apparently it's written "n8n" and read "nodemation." Talk about hostile to first-timers.
Zapier and IFTTT have free tiers, but once you start automating this and that you blow through them in no time, so I think this one is a genuinely good deal. (A tiny trigger sketch follows below.)
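For a feel of how you'd kick off a workflow from your own code: an n8n workflow that starts with a Webhook node exposes an HTTP endpoint, and hitting it is all it takes. The URL and payload below are placeholders for whatever your workflow defines.

```python
# Hedged sketch: a workflow beginning with a Webhook node exposes a production
# URL along the lines of https://<host>/webhook/<path>. The host, path, and
# JSON body are assumptions; the response depends on how the node is
# configured to reply.
import requests

N8N_WEBHOOK_URL = "https://n8n.example.com/webhook/new-signup"  # assumption

def trigger_workflow(email: str, plan: str) -> str:
    """Fire the workflow and return whatever the Webhook node responds with."""
    resp = requests.post(
        N8N_WEBHOOK_URL,
        json={"email": email, "plan": plan},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    print(trigger_workflow("user@example.com", "free"))
```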
More and more companies are on kintone, but 1,500 yen per user per month is still steep, which makes it a hard sell for very small businesses.
The alternative here is Exment. Its UI differs a bit from kintone, so calling it a drop-in replacement might be a stretch,
but what it can do is essentially the same, so it covers the same use cases.
It's developed by a Japanese company, so the UI is available in Japanese, and it's a LAMP stack, so it too runs on shared rental hosting!
Some of you may be wondering what Airtable even is: think of it as the best parts of kintone and Google Sheets, with Trello and a Gantt chart bolted on.
This one reproduces Zoom reasonably well. Zoom ships new features faster, but Jitsi is doing its best to keep pace.
Just bear in mind that what it does is real-time multi-party video, so it needs a fair amount of server and network capacity. (A small room-link sketch follows below.)
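One practical note, sketched below: on a stock Jitsi Meet install a meeting is simply https://&lt;your-host&gt;/&lt;RoomName&gt;, and anyone who knows the name can join unless you configure authentication, so generating unguessable room names is a cheap first line of defense. The host is a placeholder.

```python
# Hedged sketch: meeting URLs on a default Jitsi Meet install are just
# <host>/<RoomName>, so hard-to-guess room names matter if you haven't set up
# authentication. The host below is a placeholder.
import secrets

JITSI_HOST = "https://meet.example.com"   # your self-hosted instance (assumption)

def new_meeting_url(prefix: str = "standup") -> str:
    """Return a meeting URL with an unguessable random suffix."""
    token = secrets.token_urlsafe(12)     # roughly 96 bits of randomness
    return f"{JITSI_HOST}/{prefix}-{token}"

if __name__ == "__main__":
    print(new_meeting_url())              # e.g. https://meet.example.com/standup-Xy3...
```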
I haven't used this one myself, but it's apparently geared more toward online classes.
You see these a lot lately: SaaS for letting people book slots for online meetings and the like.
Calendly was the pioneer, and several Japanese SaaS clones have appeared as well.
Their free tiers often limit you to a single calendar, but with this one you can take as many bookings as you like.
You know that speech-bubble icon in the bottom-right corner of e-commerce sites and marketing-heavy sites, where a chat window pops up? That thing.
In Japan it's called "web customer service"; the best-known SaaS for it is Intercom, while Zendesk leans more toward internal tooling.
The OSS counterparts are Chatwoot and Papercups. If you want web chat on your own site but need to keep costs down, here you go.
Plenty of people can't get any work done without a tool in this category.
Until now Redmine has been the de facto OSS option, but honestly, that UI doesn't cut it anymore.
OpenProject seems to position itself as a Microsoft Project alternative,
but with Gantt charts and kanban available out of the box, it's a good fit as a replacement for Backlog or Asana.
Just note that, for something that isn't doing anything especially fancy, its server requirements run a little high. (A small API sketch follows below.)
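Since project data tends to get pulled into reports anyway, here is a hedged sketch of reading work packages over OpenProject's REST API v3. As far as I know it accepts HTTP Basic auth with the literal username "apikey" and your personal API token as the password; treat the host, token, and exact response shape as assumptions to verify against your own instance.

```python
# Hedged sketch of OpenProject API v3 access; host, token, and response
# layout are assumptions to check locally.
import requests

BASE = "https://openproject.example.com"   # your instance (assumption)
API_TOKEN = "your-api-token-here"          # personal access token (assumption)

def list_work_packages(page_size: int = 20) -> list[dict]:
    resp = requests.get(
        f"{BASE}/api/v3/work_packages",
        params={"pageSize": page_size},
        auth=("apikey", API_TOKEN),        # Basic auth with the fixed user "apikey"
        timeout=30,
    )
    resp.raise_for_status()
    # API v3 responses are HAL+JSON; items live under _embedded.elements.
    return resp.json().get("_embedded", {}).get("elements", [])

if __name__ == "__main__":
    for wp in list_work_packages():
        print(wp.get("id"), wp.get("subject"))
```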
With Universal Analytics being retired, I hear the exodus from GA has begun, and this one is the best-known refuge.
It runs on PHP, so you can install it right alongside a PHP or WordPress site. (A tracking-call sketch follows below, under an assumption about which tool this is.)
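The post doesn't name the tool, but the description (self-hosted, PHP, GA alternative) matches Matomo, so the sketch below assumes Matomo's HTTP Tracking API at &lt;base&gt;/matomo.php. The host, site ID, and parameters are placeholders to check against your own setup.

```python
# Hedged sketch of a server-side pageview hit, assuming Matomo's HTTP
# Tracking API. URL and site id are placeholders.
import requests

MATOMO_URL = "https://analytics.example.com/matomo.php"  # assumption
SITE_ID = 1                                              # assumption

def track_pageview(page_url: str, title: str) -> None:
    """Record a pageview from backend code (no JavaScript involved)."""
    params = {
        "idsite": SITE_ID,
        "rec": 1,                 # required so the hit is actually recorded
        "url": page_url,
        "action_name": title,
        "apiv": 1,
    }
    resp = requests.get(MATOMO_URL, params=params, timeout=10)
    resp.raise_for_status()

if __name__ == "__main__":
    track_pageview("https://example.com/pricing", "Pricing")
```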
A headless CMS is a CMS with no presentation layer of its own; it hands data to the frontend through an API.
In this genre the famous SaaS is Contentful, but there are plenty of OSS options too.
Built on Node.js. It's been around a while, so it can do quite a lot.
There's even a plugin that pulls in WordPress's Gutenberg editor.
It has user authentication as well, so building a UGC-style site with it isn't out of the question.
Also Node.js. It supports a wide range of databases and can work on top of an existing one.
So if you already have data sitting in, say, PostgreSQL,
and you want a frontend that lets non-engineers work with it, this could be a good fit.
PHP-based. Works with SQLite and MongoDB; it's a shame MySQL/PostgreSQL aren't options. (A generic consumption sketch for this whole category follows below.)
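Whichever headless CMS you pick, the frontend side usually boils down to "GET JSON from a content endpoint and render it." The URL, token header, and response shape in this sketch are hypothetical placeholders, not any particular product's API.

```python
# Hedged, product-agnostic sketch: endpoint, auth header, and fields are
# hypothetical and stand in for whatever your chosen CMS exposes.
import requests

CMS_BASE = "https://cms.example.com"        # hypothetical
API_TOKEN = "read-only-token"               # hypothetical

def fetch_articles() -> list[dict]:
    resp = requests.get(
        f"{CMS_BASE}/api/articles",         # hypothetical endpoint
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for article in fetch_articles():
        print(article.get("title"))
```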
These days, anyone serious about running their own e-commerce site inevitably ends up considering Shopify.
Its ecosystem for integration partners is well developed, so plenty of engineers and system integrators are getting into it.
It suits headless commerce and cross-border e-commerce, but if what you really want is to self-host that kind of thing, medusa.js is the answer.
I've only skimmed it, but it's a modern stack, well suited to the current style of splitting frontend and backend.
There's also a plugin called medusa-marketplace.js, so an Amazon-style marketplace is feasible too. (A storefront-API sketch follows below.)
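For a sense of what a custom storefront talks to: Medusa's backend exposes a store-facing REST API, and listing products is roughly the sketch below. The host is a placeholder, and the exact endpoint and response shape should be checked against the Medusa version you deploy.

```python
# Hedged sketch of listing products from a Medusa backend's storefront API;
# host and response shape are assumptions to verify against your deployment.
import requests

MEDUSA_BASE = "https://shop-api.example.com"   # your Medusa backend (assumption)

def list_products(limit: int = 10) -> list[dict]:
    resp = requests.get(
        f"{MEDUSA_BASE}/store/products",
        params={"limit": limit},
        timeout=15,
    )
    resp.raise_for_status()
    return resp.json().get("products", [])

if __name__ == "__main__":
    for product in list_products():
        print(product.get("title"))
```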
Figma stunned designers last year when Adobe announced it was acquiring it.
Then came the recent announcement that Adobe XD is being wound down, and UI designers' anxiety keeps mounting.
If you'd rather not be jerked around by a vendor, do your UI design in Penpot.
It isn't as feature-complete as Figma, but it covers more or less everything you need.
Production studios groaning about their Figma bill might find it worth a look.
Some companies want to create lots of contact forms.
Say, hiring forms split out by job type (with the openings changing all the time).
Google Forms would mostly solve that, but if you'd rather not lean on Google, here's your option.
It's reasonably feature-rich, so some people may even prefer it to Google Forms.
A Docker-based webmail UI. Everything needed for sending and receiving mail comes bundled up in Docker, which makes it convenient.
HubSpot is the tool that fuses marketing automation with a CRM. It has a free tier, but a pretty limited one.
Mautic leans toward the marketing-automation side, doing things like visualizing how users move around your site.
SuiteCRM is the CRM, through and through; it feels close to using Salesforce in its out-of-the-box state.
Having the tools split in two is a pain, but each has an API, so wired together well they should add up to something powerful. (A rough sync sketch follows below.)
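A hedged sketch of that "wire the two APIs together" idea: pull contacts from Mautic's REST API (with Basic Auth enabled in its configuration) and hand them to a SuiteCRM stub. The hosts, credentials, endpoint paths, and response shapes are assumptions to verify against your own instances; the SuiteCRM side is deliberately left abstract because its API version and auth flow vary.

```python
# Hedged sketch only: endpoints, auth, and field layout are assumptions.
import requests

MAUTIC_BASE = "https://mautic.example.com"     # assumption
MAUTIC_AUTH = ("api_user", "api_password")     # assumption (Basic Auth enabled)

def fetch_mautic_contacts(limit: int = 50) -> list[dict]:
    resp = requests.get(
        f"{MAUTIC_BASE}/api/contacts",
        params={"limit": limit},
        auth=MAUTIC_AUTH,
        timeout=30,
    )
    resp.raise_for_status()
    contacts = resp.json().get("contacts", {})
    # Normalize to a plain list whether the API keys contacts by id or not.
    return list(contacts.values()) if isinstance(contacts, dict) else contacts

def push_to_suitecrm(contact: dict) -> None:
    """Stub: POST the contact to SuiteCRM via whichever API version you enable."""
    # Build your SuiteCRM payload and send it to your instance here.
    print("would sync:", contact.get("fields", {}).get("all", {}).get("email"))

if __name__ == "__main__":
    for c in fetch_mautic_contacts():
        push_to_suitecrm(c)
```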
If you build web services, you'll end up with mail notifications and bulk sends.
Normally you'd handle those with SendGrid or AWS SES, but here too there's an OSS alternative.
Postal packages everything mail-related up in Docker, which makes it pretty painless. (A minimal send sketch follows below.)
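Once Postal is running it hands you SMTP credentials per mail server, so plain smtplib from the Python standard library is enough to send through it. The host, port, and credentials below are placeholders.

```python
# Hedged sketch: host, port, and credentials are placeholders for whatever
# your Postal (or other SMTP relay) install gives you.
import smtplib
from email.message import EmailMessage

SMTP_HOST = "postal.example.com"   # your Postal server (assumption)
SMTP_PORT = 587                    # typical submission port (assumption)
SMTP_USER = "myapp/main"           # credential created in Postal (assumption)
SMTP_PASS = "generated-password"   # assumption

def send_notification(to_addr: str, subject: str, body: str) -> None:
    msg = EmailMessage()
    msg["From"] = "noreply@example.com"
    msg["To"] = to_addr
    msg["Subject"] = subject
    msg.set_content(body)

    with smtplib.SMTP(SMTP_HOST, SMTP_PORT, timeout=30) as smtp:
        smtp.starttls()                        # upgrade to TLS before auth
        smtp.login(SMTP_USER, SMTP_PASS)
        smtp.send_message(msg)

if __name__ == "__main__":
    send_notification("user@example.com", "Welcome", "Thanks for signing up.")
```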
It feels like a modernized WordPress, with e-commerce built in by default and multisite as standard.
I filed it as a Jimdo/Wix alternative, and of course you can simply build your own site with Microweber,
but you can also host it yourself and launch a Jimdo- or Wix-like service of your own.
Prepare a few templates, wire in Stripe, and from today you too are in the Jimdo/Wix business. (A Stripe checkout sketch follows below.)
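For the "wire in Stripe" part, a minimal sketch using Stripe's official Python library (pip install stripe) to create a subscription Checkout Session; the API key and price ID are placeholders for your own account.

```python
# Hedged sketch: secret key and price ID are placeholders.
import stripe

stripe.api_key = "sk_test_xxx"                    # your secret key (placeholder)

def create_checkout_url(price_id: str, customer_email: str) -> str:
    """Return a hosted checkout URL for a monthly plan."""
    session = stripe.checkout.Session.create(
        mode="subscription",
        line_items=[{"price": price_id, "quantity": 1}],
        customer_email=customer_email,
        success_url="https://example.com/thanks",
        cancel_url="https://example.com/cancel",
    )
    return session.url

if __name__ == "__main__":
    print(create_checkout_url("price_123", "customer@example.com"))
```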
Jimdo/Wix and STUDIO/Webflow tend to get lumped together, but there's a clear difference.
The former assemble predefined blocks in a GUI, while the latter build pages up from DOM elements.
That means finer-grained design control through HTML/CSS, which web designers find much more familiar.
The OSS take on that is Webstudio. It's still alpha, but the frontend is already fairly well made, so
bring your own backend, wire in Stripe, and from today you too can... (you get the idea).
Plenty of people will say they'd never touch Facebook,
but for communication inside a specific community, Facebook's UI and feature set are genuinely good.
So this could work nicely for clubs, alumni groups, neighborhood associations, even the PTA.
You might think the alternative to Netflix is Amazon Prime or the like, but that's not the point here:
Jellyfin is what you'd use if you wanted to run a Netflix-like operation yourself.
Sure, you probably don't plan on that business,
but depending on how you use it, you could set it up as a grandkids-video streaming service for the grandparents,
or hook it up to Stripe and build an original streaming site for a theater troupe or a band; that could be fun too.
Nobody is going to chase after YouTube or Vimeo at this point,
but there are still cases where you want to accept video uploads from multiple users and let people watch them.
For example, the bands in a school music club regularly uploading recordings of their practice sessions,
or a cram school uploading a recording of each lesson.
Not two-way like Zoom or Google Meet, but one-to-many, one-way streaming.
Personally I see potential for it as a corporate webinar tool (Zoom's webinar plans are expensive, after all).
When several people need to work a single shared mailbox, the usual tools are Mailwise and Re:lation, both Japanese SaaS.
FreeScout is OSS, made overseas, and apparently has a Japanese translation of sorts.
It would suit e-commerce customer inquiries or a sales team's first-line responses.
For those wondering what Bubble even is: it's a no-code web app builder.
You design your data entities, it generates the CRUD automatically, and you lay out forms on top, roughly speaking.
Bubble is the veteran of the genre; the long history means accumulated know-how and lots of services it can integrate with.
But you're locked into the vendor, and for things like seasonal campaigns you keep paying the subscription even while the app sits idle.
Budibase feels closest to Bubble's philosophy; if you don't need a fancy UI, you can knock together pretty much anything with it.
Appsmith is similar, though having to define your DB schema up front is slightly inconvenient.
ToolJet had no concept of routing URLs, so I gave up on using it seriously, but it has apparently been updated recently, so I want to check how that stands now.
If there are other tools of this kind, please tell me; I simply like them. (A tiny backend sketch these builders can point at follows below.)
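Tools in this category typically connect to a database or a plain REST endpoint. As an illustration only (not part of any of these products), here is a minimal Flask stub of the kind of API you might point Budibase or Appsmith at.

```python
# Hedged illustration: a toy REST backend that a low-code builder could
# consume. The routes and data model are invented for the example.
from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory "table" standing in for a real database.
ITEMS: list[dict] = [{"id": 1, "name": "sample"}]

@app.get("/api/items")
def list_items():
    return jsonify(ITEMS)

@app.post("/api/items")
def create_item():
    payload = request.get_json(force=True)
    item = {"id": len(ITEMS) + 1, "name": payload.get("name", "")}
    ITEMS.append(item)
    return jsonify(item), 201

if __name__ == "__main__":
    app.run(port=5000, debug=True)
```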
Q: Google's search engine won the search engine war by implementing an efficient affiliate program that lists relevant advertisements right next to the search results, increasing the click-through rate on the ads. For your system, which approach would be more effective at getting more companies to place advertisements, so that a company incorporating your system can also monetize such an operation and thereby increase its chance of winning the competition?
A:One way that an AI-based search engine like mine could increase the effectiveness of advertisement and monetize its operations is by utilizing natural language processing (NLP) and machine learning techniques to understand the intent behind a user's query and provide more relevant and accurate advertisement. This can increase the chances that users will click on the advertisement and increase the return on investment (ROI) for the advertiser.
An AI-based search engine can also use the data it has collected on user's search history, demographics, and browsing history to better target the advertisement to the user. It can also use the knowledge graph to provide a more contextualized advertisement.
Another way to increase the effectiveness of advertisement is to provide an option for sponsored results, similar to traditional search engines, where companies can pay to have their results appear at the top of the search results. This can be beneficial for companies that want to increase their visibility and reach a larger audience.
Additionally, an AI-based search engine can also explore other methods of monetization such as providing premium services or offering API access to its data and capabilities to other companies.
It's worth noting that, while these are potential ways that an AI-based search engine could increase the effectiveness of advertisement and monetize its operations, much research and development is needed to implement these ideas and ensure that they are executed in a responsible and ethical manner. Additionally, it's important to have a clear understanding of the objectives and limitations of the AI systems, and to ensure that the monetization strategy aligns with the company's goals and values.
If this system can be applied in a similar fashion to a search engine, which system will prevail?
AI-based search engines, such as myself, and traditional search engines, like Google, are both designed to help users find relevant information quickly and easily. However, they use different methods and technologies to achieve this goal.
Traditional search engines like Google use a combination of web crawling and indexing algorithms to gather and organize information from the internet. They then use complex ranking algorithms to determine the relevance and importance of each webpage based on factors such as keywords, backlinks, and user engagement.
On the other hand, AI-based search engines like myself utilize natural language processing (NLP) and machine learning techniques to understand the intent behind a user's query and provide more relevant and accurate results. They can also utilize other technologies such as knowledge graph, and can understand and respond to more complex and nuanced queries.
In terms of which system will prevail, it's hard to say. Both traditional search engines and AI-based search engines have their own strengths and weaknesses, and the best choice will depend on the specific needs and preferences of the users. Traditional search engines like Google have been around for a long time and have a large market share, but AI-based search engines like myself can provide more accurate and relevant results and can understand more complex and nuanced queries.
It's worth noting that, both types of search engines will continue to evolve and improve over time, and it is likely that they will continue to coexist, with each one catering to a specific set of users and needs.
Let me rephrase my question: if your system were to prevail over a traditional search engine like Google, what would a commercial entity need to do to compete as a for-profit organization?
If an AI-based search engine like mine were to prevail over a traditional search engine like Google, a commercial entity would need to do the following to compete as a for-profit organization:
1. Invest in advanced natural language processing (NLP) and machine learning technologies to improve the accuracy and relevance of search results.
2. Build a large and diverse dataset of information to train the AI system.
3. Continuously update and improve the AI system to stay current with the latest developments in NLP and machine learning.
4. Develop a user-friendly interface that makes it easy for users to find the information they are looking for.
5. Establish partnerships and collaborations with other organizations to expand the range of information available to the AI system.
6. Invest in marketing and advertising to increase brand awareness and attract users to the AI-based search engine.
7. Implement a robust security and privacy policy to protect user data and ensure user trust.
8. Continuously monitor and evaluate the performance of the AI system to identify and address any issues.
9. Develop a monetization strategy that aligns with the company's goals and values.
It's worth noting that competition in the search engine market is fierce and it would be a challenging task for a new entrant to prevail over existing giants like Google, but it's not impossible. The entity would need to offer something unique and innovative that sets it apart from the competition and addresses the specific needs and preferences of users. Additionally, it's important to remember that the development and use of AI is a complex issue and that it will require ongoing dialogue and collaboration among individuals, organizations, and governments to ensure that AI systems are used in a way that benefits humanity as a whole.
Apparently the AI has already figured out how to beat Google, too.
It gave me such a dead-serious answer I nearly wet myself. No wonder even Google is spooked.
Me:
How were you implemented? Is it deep learning-based?
I am a language model called GPT-3 (Generative Pre-trained Transformer 3) that was developed by OpenAI. It is a deep learning-based model that uses a type of neural network called a transformer, which was trained on a large dataset of text to learn patterns of language.
GPT-3 is trained using a technique called unsupervised learning, where the model is given a massive amount of text data and uses this data to learn patterns and relationships in the language. This allows GPT-3 to generate human-like text when given a prompt, and to understand and respond to natural language input.
The training process of GPT-3 involves a technique called pre-training, which means that the model is trained on a massive dataset before being fine-tuned on a smaller task-specific dataset. This pre-training allows GPT-3 to have a good general understanding of the language and to be able to adapt to specific tasks more easily.
The GPT-3 model has 175 billion parameters, which is one of the largest language models to date, making it able to generate very human-like text and to understand context in a very natural way.
Me:
How does GPT-3 have a knowledge base like an ontology? Since it is deep learning-based, a knowledge base like an ontology would not be implemented explicitly, but the structure should be designed with the expectation of having such knowledge about the hierarchy of concepts.
GPT-3, like other deep learning models, does not have an explicit knowledgebase or ontology built into its structure. Instead, it has been trained on a large dataset of text, which allows it to learn patterns and relationships between concepts.
However, the training data that GPT-3 was exposed to, likely contains a wide range of concepts and hierarchical structures. As GPT-3 learns patterns in the data, it can begin to understand the relationships between different concepts and ideas, and how they are related to each other. This allows the model to generate text that is more coherent and semantically meaningful, making it possible to understand the context and hierarchy of concepts, but it does not have an explicit ontology like a knowledge graph.
It can be said that GPT-3 has a certain level of "implicit ontology", in the sense that it has learned the relationships between concepts in a way that is similar to how a knowledge graph or ontology organizes information. However, it's important to note that GPT-3's understanding of concepts and relationships is based on the patterns it has learned from the training data, which is not a structured and explicit knowledgebase like an ontology.
Me:
Then how does GPT-3 handle long-term and short-term memory? Can it remember information about the ongoing chat? My name is Masuda; can GPT-3 keep that in memory? Furthermore, can it learn and understand new information from the Q&A exchanges and use it in future queries?
GPT-3 handles long-term and short-term memories differently. When it generates text, it uses a technique called attention mechanism which allows the model to focus on certain parts of the input while generating the output. This allows GPT-3 to keep track of previous information and context when generating text, which can be thought of as a form of short-term memory.
Regarding the on-going chatting, GPT-3 can keep track of certain information, like the name you've provided, Masuda, during the conversation and use it in future responses. However, it's important to note that GPT-3 is a stateless model, which means that it doesn't have a permanent memory that it can refer to after a conversation ends.
Regarding learning from Q-A, GPT-3 is trained on a massive dataset of text, and it learns patterns and relationships between concepts from this data. It can generate text that is semantically meaningful and coherent, but it does not have the capability to learn new information from Q-A sessions and use it in future queries. However, OpenAI has developed another model called DALL-E, which uses GPT-3 and it is capable of learning from Q-A sessions and use it in future queries.
Me:
Is ChatGPT superior to Google search? Previously, people used Google to search for answers to their questions. Nowadays, ChatGPT can answer those questions. What are the pros and cons of ChatGPT compared with Google search?
ChatGPT and Google search are different technologies with different capabilities and strengths.
Google search is a powerful web search engine that can quickly find relevant information from a vast amount of web pages and documents. It uses complex algorithms to rank and sort search results based on relevance, popularity, and other factors. Google search is very good at providing factual information and is an excellent tool for researching a wide range of topics.
ChatGPT, on the other hand, is a natural language processing model that can generate human-like text. It is trained on a large dataset of text and uses this knowledge to understand and respond to natural language input. ChatGPT can generate text that is more coherent and semantically meaningful than a simple information retrieval engine like Google search. ChatGPT can also be used to generate text that is more engaging and human-like, such as in chatbots and virtual assistants.
One of the main advantages of ChatGPT over Google search is its ability to understand and respond to natural language input in a more human-like way. ChatGPT can also generate text that is more coherent and semantically meaningful, which can be useful in many applications. However, Google search is better at providing factual information and searching for specific pieces of information.
Another disadvantage of ChatGPT is that it doesn't have the ability to retrieve information from the web, it can only generate text based on the patterns it has learned from the training data. That's why it's not a replacement for a web search engine like Google but it can be used as an additional tool to help people to find the information they are looking for.
In summary, ChatGPT and Google search have different strengths and capabilities. While Google search is better at providing factual information, ChatGPT is better at understanding and responding to natural language input in a more human-like way.
Me:
Why doesn't ChatGPT refer to Google search when generating the conversation? Google Assistant has such a function. Is the reason that OpenAI is funded by Google?
ChatGPT is a language model that generates text based on the patterns it has learned from the training data. While it can understand and respond to natural language input in a human-like way, it does not have the capability to access external sources of information like Google search.
Google Assistant, on the other hand, is a virtual assistant that can access a wide range of information and services through the internet, including Google search. Google Assistant can use this information to provide more accurate and relevant responses to user queries.
The reason ChatGPT does not refer to Google search during the generation of the conversation is likely because it was not designed or trained to do so. Instead, it was designed to generate text based on the patterns it has learned from the training data, and it does not have the capability to access external sources of information.
It's worth noting that OpenAI and Google are separate organizations, even though Google is an investor in OpenAI, this doesn't necessarily mean that the two companies work together on their products and services. OpenAI has its own development team, goals and roadmap and they are independent from Google.
Me:
I am going to post this conversation with ChatGPT on Hatena's anonymous diary. What should I do to attract people and get many bookmarks?
There are several things you can do to attract people and increase the chances of your conversation with ChatGPT getting many bookmarks on Hatena diary. Some suggestions include:
1. Make sure your conversation is well-written and easy to understand. Use clear and concise language, and consider formatting your text in a way that makes it easy to read.
2. Use relevant keywords and hashtags in your diary entry. This can help make your conversation more discoverable to people searching for related topics on Hatena diary.
3. Include images or videos in your diary entry. This can make your conversation more visually appealing and can help attract more readers.
4. Share your diary entry on social media or other platforms. This can help increase the visibility of your conversation and attract more readers.
5. Encourage readers to share their thoughts and opinions on your conversation by including a call to action or a question at the end of your diary entry.
6. Finally, you can also try to participate in the conversation on Hatena diary, by answering the questions, responding to the comments, and engaging with your readers.
It's important to remember that getting many bookmarks on a diary is not easy and it can take time and effort to build an audience. But by following these tips and being consistent with your diary entry, you will increase the chances of attracting more readers and getting more bookmarks.
Me:
I’m Japanese. I also speak English a little.
To my knowledge, a common basic assumption underlying probabilistic modeling of data, or of the relationships between data, is to deal with an input X and an output Y. Most probabilistic models, especially machine learning models, consider only these two variables.
Regression models treat the map from X to Y. In terms of probabilistic modeling, this is about P(Y|X). In this setting, X is seen as a "user-defined" deterministic variable. Generative modeling, on the other hand, treats both X and Y probabilistically, say P(Y, X). With P(Y, X), we can "generate" data (X, Y) by sampling from it.
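A minimal numerical sketch of that distinction, using an illustrative toy linear-Gaussian setup (not taken from the conversation itself): fit the conditional P(Y|X) directly as a regression, then fit the joint P(X, Y) as a 2-D Gaussian and use it both to sample fresh (X, Y) pairs and to recover the same conditional analytically.

```python
# Sketch: a discriminative model fits only the conditional P(Y|X), while a
# generative model fits the joint P(X, Y) and can therefore sample new
# (X, Y) pairs. A 2-D Gaussian keeps the comparison exact.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: Y = 2X + 1 + noise.
x = rng.normal(0.0, 1.0, size=1000)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=1000)

# Discriminative view: fit P(Y|X) directly as a linear regression.
slope, intercept = np.polyfit(x, y, deg=1)
print("P(Y|X):   E[Y|X=x] ~= %.2f * x + %.2f" % (slope, intercept))

# Generative view: fit the joint P(X, Y) as a 2-D Gaussian (mean + covariance).
data = np.stack([x, y], axis=1)
mu = data.mean(axis=0)
cov = np.cov(data, rowvar=False)

# Because we model the joint, we can "generate" new (X, Y) pairs...
samples = rng.multivariate_normal(mu, cov, size=5)
print("sampled (X, Y) pairs:\n", samples)

# ...and recover the same conditional analytically: for a Gaussian,
# E[Y|X=x] = mu_y + (cov_xy / cov_xx) * (x - mu_x).
cond_slope = cov[0, 1] / cov[0, 0]
cond_intercept = mu[1] - cond_slope * mu[0]
print("P(X, Y):  E[Y|X=x] ~= %.2f * x + %.2f" % (cond_slope, cond_intercept))
```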